Windows Autopatch: Auto-remediation with PowerShell scripts
Windows Autopatch deploys and continuously monitors Microsoft Intune policies to enrolled tenants. Policy conflicts can occur when there are two or more policies in the tenant and might impact Windows updates in Windows Autopatch. Let’s take a look at what causes policy conflicts and, more importantly, how you can easily resolve them with PowerShell scripts.
Important: This solution is for you if you currently use Microsoft Intune and don’t use third-party application patching solutions with Microsoft Configuration Manager.
The origin of Windows update policy conflicts
Policy conflicts can prevent successful deployment of Windows quality and feature updates. These issues are common in environments that rely on Configuration Manager and Group Policy Objects (GPOs).
Have you transitioned to modern management through co-management and shifted the control slider to Microsoft Intune? This step is crucial for enabling Windows Autopatch. However, legacy configurations can leave registry artifacts that disrupt the service operations.
When you use Configuration Manager, it’s important that you don’t enable software update client settings that might conflict with Autopatch policies. If you currently allow, or plan to allow, Microsoft 365 app updates for Windows Autopatch–enrolled devices, the best practice is to disable Office updates via Configuration Manager. Alternatively, if you don’t use Configuration Manager to distribute third-party application updates, you can disable the software update client entirely. Unless you’re using a third-party application patching solution, make sure that you remove any existing client configuration that can conflict with Autopatch. To learn how, review Client settings in Configuration Manager.
However, even well-configured environments might retain update-policy registry artifacts. Take these primary actions, detailed below, to address them:
Copy the detection script.
Copy the remediation script.
Upload and deploy the scripts in Microsoft Intune.
Monitor script execution.
Collect log files.
Note: This solution is based on recommendations in our official documentation, such as Conflicting configurations.
Copy the detection script
This PowerShell script performs several operations to detect and log specific Windows Update policy settings that could prevent correct update deployments. The detection script:
Defines log location and name. The script sets up variables for the path ($TranscriptPath) and name ($TranscriptName) of the log file where it will store the output. These point to a log file named “AutoPatchDetection.log” within the following folder: C:\ProgramData\Microsoft\IntuneManagementExtension\Logs.
Creates log directory (if necessary). The script attempts to create the directory specified by $TranscriptPath if it doesn’t already exist, using the -Force parameter to overwrite any existing files without prompting for confirmation.
Stops orphaned transcripts. The script checks for any leftover PowerShell logging sessions and stops them to prevent interference. If there’s no active transcript session, it catches the resulting error and does nothing, preventing the script from stopping.
Starts transcription. The script begins a new transcription session. It saves the output to the specified log file and appends to it if it already exists.
Creates registry key array. The script creates an array ($regkeys) to hold objects representing specific registry keys related to Windows Update policies. Each object contains a name and a path indicating the location of the registry key and the value that it’s looking for.
Populates array with key information. The script adds objects to the $regkeys array representing the following registry keys:
DoNotConnectToWindowsUpdateInternetLocations
DisableWindowsUpdateAccess
NoAutoUpdate
Important: These keys are associated with policies that might limit the functionality of Windows updates.
Checks registry settings. The script iterates over each object in the $regkeys array, checking if the specified registry key exists and contains the specified property (expected policy setting). If it finds a key that matches, the script raises a flag $RemediationNeeded. This flag indicates that there are registry settings that need correction and have been logged.
Logs and exits based on findings.
If any incorrect settings were detected ($RemediationNeeded -eq $true), the script logs the issue, ends the logging session, and exits with code 1. Code 1 indicates that an error was found.
If no problems are found, the script logs that the registry settings are correct, stops logging, and exits with code 0. Code 0 indicates that everything is in order.
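The detection contract described above can be sketched language-neutrally. In this Python sketch the registry is simulated with a plain dictionary (test data only, not a real registry API), since the actual script queries HKLM on the device:

```python
# Minimal sketch of the detection contract: scan known policy locations for
# conflicting value names; exit code 1 means remediation is needed, 0 means
# the device is compliant.

# Simulated registry state (illustrative); on a real device these values live
# under HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate.
REGISTRY = {
    r"HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate": {
        "DisableWindowsUpdateAccess": 1,  # simulated leftover GPO artifact
    },
    r"HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU": {},
}

# The (path, value name) pairs the detection script checks.
TARGETS = [
    (r"HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate",
     "DoNotConnectToWindowsUpdateInternetLocations"),
    (r"HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate",
     "DisableWindowsUpdateAccess"),
    (r"HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU",
     "NoAutoUpdate"),
]

def detect(registry, targets):
    """Return 1 (remediation needed) if any conflicting value exists, else 0."""
    return 1 if any(name in registry.get(path, {}) for path, name in targets) else 0

print(detect(REGISTRY, TARGETS))  # 1, because DisableWindowsUpdateAccess is set
```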
Tip: You can either copy the code presented here, or, if you want to engage with your peers, get it from GitHub.
$TranscriptPath = "C:\ProgramData\Microsoft\IntuneManagementExtension\Logs"
$TranscriptName = "AutoPatchDetection.log"
New-Item $TranscriptPath -ItemType Directory -Force
# stop orphaned transcripts
try
{
    Stop-Transcript | Out-Null
}
catch [System.InvalidOperationException]
{}
Start-Transcript -Path "$TranscriptPath\$TranscriptName" -Append
# initialize the array
[PsObject[]]$regkeys = @()
# populate the array with each object
$regkeys += [PsObject]@{ Name = "DoNotConnectToWindowsUpdateInternetLocations"; Path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" }
$regkeys += [PsObject]@{ Name = "DisableWindowsUpdateAccess"; Path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" }
$regkeys += [PsObject]@{ Name = "NoAutoUpdate"; Path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" }
foreach ($setting in $regkeys)
{
    Write-Host "checking $($setting.Name)"
    if ((Get-Item $setting.Path -ErrorAction Ignore).Property -contains $setting.Name)
    {
        Write-Host "$($setting.Name) is not correct"
        $RemediationNeeded = $true
    }
}
if ($RemediationNeeded -eq $true)
{
    Write-Host "registry settings are incorrect"
    Stop-Transcript
    exit 1
}
else
{
    Write-Host "registry settings are correct"
    Stop-Transcript
    exit 0
}
Copy the remediation script
You can remediate Windows Update policy conflicts with this PowerShell script. It removes specific registry keys that can prevent updates from being deployed successfully. The remediation script:
Sets up log file. It establishes a file name and a directory where the script’s output will be recorded for logging purposes: C:\ProgramData\Microsoft\IntuneManagementExtension\Logs\AutoPatchRemediation.log.
Creates log directory (if necessary). The script ensures the existence of the specified directory for the log file. If it doesn’t already exist, the script creates it using the -Force parameter to override any potential issues without prompting.
Stops orphaned transcripts. It attempts to stop any transcription (logging) sessions that might have been left running previously. If no such session is active, and an error occurs, it catches the error to prevent the script from failing at this point.
Starts new transcription. It begins logging the script’s output to the specified log file, appending to it if the file already exists. This process records all actions taken by the script.
Creates registry key array. It creates an array to hold objects, each representing a specific registry key related to Windows Update policies that might cause conflicts.
Populates array with target keys. The script adds several objects to this array, each specifying the name and path of a registry key that potentially conflicts with Windows updates. These keys include:
DoNotConnectToWindowsUpdateInternetLocations
DisableWindowsUpdateAccess
NoAutoUpdate
Remediates conflicts. For each registry key object in the array, the script checks if the specified key exists and contains the property (name) indicated.
If the property exists and indicates a potential policy conflict, the script removes this property from the registry to remediate the conflict. It then logs this action.
If the property does not exist, it logs that the specific setting was not found. In that case, no action is needed for that key.
Stops transcription. Once all specified registry keys have been checked and remediated as necessary, the script stops logging its actions and closes the log file.
Tip: You can either copy the code presented here, or, if you want to engage with your peers, get it from GitHub.
$TranscriptPath = "C:\ProgramData\Microsoft\IntuneManagementExtension\Logs"
$TranscriptName = "AutoPatchRemediation.log"
New-Item $TranscriptPath -ItemType Directory -Force
# stop orphaned transcripts
try
{
    Stop-Transcript | Out-Null
}
catch [System.InvalidOperationException]
{}
Start-Transcript -Path "$TranscriptPath\$TranscriptName" -Append
# initialize the array
[PsObject[]]$regkeys = @()
# populate the array with each object
$regkeys += [PsObject]@{ Name = "DoNotConnectToWindowsUpdateInternetLocations"; Path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" }
$regkeys += [PsObject]@{ Name = "DisableWindowsUpdateAccess"; Path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" }
$regkeys += [PsObject]@{ Name = "NoAutoUpdate"; Path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" }
foreach ($setting in $regkeys)
{
    Write-Host "checking $($setting.Name)"
    if ((Get-Item $setting.Path -ErrorAction Ignore).Property -contains $setting.Name)
    {
        Write-Host "remediating $($setting.Name)"
        Remove-ItemProperty -Path $setting.Path -Name $setting.Name
    }
    else
    {
        Write-Host "$($setting.Name) was not found"
    }
}
Stop-Transcript
Upload and deploy the scripts in Microsoft Intune
Now let’s put the solution to work. First, we’ll deploy the detection and remediation scripts through Microsoft Intune. This is where you can ensure precise configuration adjustments, resolve policy conflicts, and enhance Windows update deployments with Windows Autopatch.
Creating a new remediation script
Sign in to https://intune.microsoft.com.
Navigate to Devices.
Select Scripts and remediations. Out of the available script options, select Remediations.
Select Create to start creating a new custom script.
In the Create custom script wizard, you’ll complete all required information step by step, as shown below.
Using the wizard to create a custom script
Let’s walk through the wizard to create a custom script based on our recommendations.
In the Basics tab, give your script a name and fill out the following details:
Name: Enter the name of your remediation script. For example, you can call it “Windows Autopatch – Auto-remediation policies.”
Description (optional): Enter a description with helpful details about this script.
Publisher (automatic): Your name is automatically filled in, but you can change it.
Version: Give your script a version number. This helps you track changes and ensures consistency across deployments.
In the Settings tab, add detection and remediation scripts to the solution:
Add detection script: Select the folder icon to locate your detection script and open it. This will add the script.
Add remediation script: Select the folder icon to locate the remediation script and open it. This will add the script.
Make sure to keep the default settings of “No” on all the following script behaviors. (Tip: Move down the page if you don’t see these options right away.)
Run this script using the logged-on credentials: No
Enforce script signature check: No
Run script in 64-bit PowerShell: No
In the Scope tags tab, select your scope tags if you wish to use them.
In the Assignment tab, configure the deployment options as follows:
Assign to: You can ignore this option for now.
Assignment group: Select the “Windows Autopatch – Devices All” group. That will ensure that you target all devices registered in the Windows Autopatch service.
Schedule: Select Daily to edit the schedule for your remediation. You have multiple options to run the script. You can run it once, hourly, or daily. You can choose if and how often to repeat it. You can also choose a start time. Note: We recommend running this script every hour to remediate any drifts.
Filter: Optionally, use filters to assign a policy based on the rules that you created earlier. We’re not using filters in this guide.
Filter mode: Optionally, include or exclude the groups that receive the filter assignment.
Select Next to review your settings and create your remediation script.
Select Create. If all goes as planned, you’ll now have your remediation script created and assigned to all your Windows Autopatch devices.
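If you prefer automation over the portal wizard, a remediation package like this one can also be created programmatically. The Python sketch below only builds the request body; the endpoint and property names follow the Microsoft Graph beta deviceHealthScript resource (POST https://graph.microsoft.com/beta/deviceManagement/deviceHealthScripts), and authentication plus the actual POST are omitted. The publisher value is a placeholder:

```python
import base64
import json

def build_payload(name, detection_ps1, remediation_ps1):
    """Build the request body for creating an Intune remediation script package.

    Script contents must be base64-encoded per the deviceHealthScript resource.
    The boolean settings mirror the defaults recommended in the wizard above.
    """
    return {
        "displayName": name,
        "description": "Removes conflicting Windows Update registry policies",
        "publisher": "IT Department",    # placeholder value
        "runAsAccount": "system",        # "Run using logged-on credentials": No
        "enforceSignatureCheck": False,  # "Enforce script signature check": No
        "runAs32Bit": True,              # "Run script in 64-bit PowerShell": No
        "detectionScriptContent": base64.b64encode(detection_ps1.encode()).decode(),
        "remediationScriptContent": base64.b64encode(remediation_ps1.encode()).decode(),
    }

payload = build_payload(
    "Windows Autopatch - Auto-remediation policies",
    detection_ps1='Write-Host "detect"',
    remediation_ps1='Write-Host "remediate"',
)
print(json.dumps(payload, indent=2))
```

Assigning the package to the “Windows Autopatch – Devices All” group would still be a separate call, so treat this as a sketch of the creation step only.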
Monitor script execution
After deployment, use Intune to monitor the execution status and outcomes of your scripts through the Device status page within the remediation script package.
If you select the script package name in Remediations, you’ll get some information about how your script package is performing.
If you’d like to have more insights into the Detection status and Remediations status metrics, check the Overview page.
Collect log files
The scripts have some intelligence built in: they write detailed output to log files saved in the Microsoft Intune Management Extension (IME) log folder.
Important: The generated log files are stored in the IME folder C:\ProgramData\Microsoft\IntuneManagementExtension\Logs. This simplifies the diagnostic process and allows for easy collection through Intune.
Use the Collect diagnostics function in Intune to obtain these log files produced by the detection and remediation scripts. Here are the steps to collect and read the logs:
From https://intune.microsoft.com, navigate to Devices.
Go to All devices.
Enter your device name in the search bar.
Select the device name to view the device’s information.
Select Collect diagnostics to ask Intune to gather the diagnostic data from the device.
Once the diagnostic files are uploaded, the Device action status changes from “Pending” to “Complete.” (Tip: You might want to hit F5 a couple of times to speed up the refresh in the console.) If you wish, you can download the zip file with all the log files from the Device diagnostics page for each device.
Open the zip file and search for the IntuneManagementExtension_Logs folder. That folder contains our two log files created by the remediation script. If no remediation was required, you’ll have only the detection log.
If you want to know if the remediations were executed, search for “remediating” in the autopatchremediation.log file. You’ll notice that every check is listed in the log. Whenever a drift is detected, it will be flagged for remediation by the detection script.
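If you’re scripting that search across many collected logs, the check is a simple line scan. Here’s a Python sketch; the sample log text is made up but mirrors the messages the remediation script writes:

```python
# Sketch: find which settings the remediation script actually fixed by
# scanning the log for "remediating <name>" lines.
SAMPLE_LOG = """\
checking DoNotConnectToWindowsUpdateInternetLocations
remediating DoNotConnectToWindowsUpdateInternetLocations
checking DisableWindowsUpdateAccess
DisableWindowsUpdateAccess was not found
checking NoAutoUpdate
remediating NoAutoUpdate
"""

def remediated_settings(log_text):
    """Return the setting names that were remediated, in log order."""
    return [line.split(" ", 1)[1]
            for line in log_text.splitlines()
            if line.startswith("remediating ")]

print(remediated_settings(SAMPLE_LOG))
# ['DoNotConnectToWindowsUpdateInternetLocations', 'NoAutoUpdate']
```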
Final considerations
By following these guidelines, you can streamline update deployments and maintain system integrity.
Important: Make sure to test this in your environment on a small set of devices before pushing it out.
In the complex ecosystem of modern IT environments, ensuring the smooth deployment of Windows updates is critical. With these auto-remediation scripts you can resolve policy conflicts and maintain system performance more effectively with Windows Autopatch.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X and on LinkedIn. Looking for support? Visit Windows on Microsoft Q&A.
Microsoft Tech Community – Latest Blogs –Read More
Cell Displaying Wrong
For many years I have linked cells from the same page, separate tabs within a document, or even from document to document, never with any issues. But now I am trying to link a text field cell to another text field cell in a separate tab within the same document. The tab with the info is just a person’s name which I want to display on another page.
I have tried everything I can find that toggles the display of cells from formula to the results of the formula, and no matter what I do it ONLY shows the formula and not the results of the formula.
Now, as I’ve been working to fix this issue, all cells anywhere in the document are doing this, be it the same page, a different page/tab, or a separate document!
Making Attachments Mandatory for List Items on Power Apps
Hello,
I am trying to make newly added list items with attachments mandatory and have written the following formula for a customized Power Apps form.
However, when I attempt to use this form, it returns the following error, and I am unable to save a new item.
Does anyone have an idea about what is wrong?
Thnx in advance!
Can I send a survey to multiple people w different data pre-filled?
I am trying to get feedback from our hiring managers for our recruiters. Can I use the same Form for the survey, but update the job and recruiter before sending to a specific hiring manager. Is there a way to do this (maybe with Power Automate?) and have all results in the same Excel sheet?
Find Disk IOPs via PowerShell (What metric to use)
Hello:
1. Can someone please advise what metric should I use to find IOPs (let’s say for Read)?
I’m going to run something like below, but not sure what metric to use:
Get-AzMetric -ResourceId $resourceId -TimeGrain 6:00:00 -StartTime $st30 -EndTime $et -DetailedOutput -MetricNames “DiskReadIOPS???”
2. In addition, for the future, is there a way to see/find from Get-AzMetric all available metrics for a specific resource?
Thank you!
Copilot Prompt Limit Difference Between Licenses
Hello,
Based on a user’s M365/O365 license, are prompt character limits different for Copilot for M365 prompts? For example, a user assigned a Business Basic license has a 2,000-character limit for prompts, and a user assigned an Office 365 E3 license has a 4,000-character limit. Why is this the case? Does licensing change anything with prompt limits?
Thank you
Windows Licensing for Startups
Our company recently joined the Microsoft for Startups program. We understand that this gives us access to a lot of Azure resources but we are building a physical product that we want to base on a Windows LTSC OS. I can’t find any information about whether or not the Startups program includes any OS licensing benefits. Does anyone have any insight?
Adoption Masterminds September Meetup
Excited to share the registration link for the Adoption Masterminds September meetup: https://events.teams.microsoft.com/event/03b3e79b-99c4-4c52-b19d-844800f3c211@8b3dd73e-4e72-4679-b191-56da1588712b
Join us as Caitlin O’Kane shares her experience setting up and running a Digital Ambassador Network at her company!!! Caitlin and I met and she showed me around and I left full of ideas for my own Champions Program, and that is our goal for this session!!!
Come ready to hear, share, and explore together!!!
Stuck at Employment verification rejection
@JillArmourMicrosoft Tried to update Developer account info and always got rejected at the “Employment verification” step.
The “Fix now” button just reloads the page and doesn’t allow me to upload docs.
Tried in Edge, Chrome, Safari.
Restrict only use Safari, Edge and Firefox browsers on iPAD
How do I go about restricting usage to only these 3 browsers on iPad? Are there Intune policies that can be configured?
Alert: Compliance Manager Default Alert Policy
Hi community,
I received this email, can anyone tell me more about this and if I should take any action?
Currently all my privileged admin users have a conditional access policy to force them to set up MFA.
Effortlessly Migrate Azure VMs between zones
For various reasons, you might come across a situation where you need to migrate your Azure VMs from one zone to another. Migrating Azure VMs between zones can be a daunting and time-consuming task, especially when done manually. I’m excited to share a Python tool I’ve developed that simplifies the process of migrating VMs, along with their data disks, across zones within Microsoft Azure.
The Challenge of VM migration
Migrating VMs between zones, especially along with their data disks, can be a daunting task. It involves creating snapshots, creating disks from the snapshots in the new zone, creating VMs using those disks, and ensuring that all configurations are correctly set up in the new zone. This process is not only time-consuming but also prone to errors if done manually.
Introducing the Azure VM Migration Tool
To address this challenge, I’ve created a Python script that automates the entire migration process. The tool takes snapshots of the disks attached to a VM in one Azure zone, creates new disks from those snapshots in another zone, creates a new VM using those disks, and handles all the necessary steps in between. All you need to do is provide the necessary information in a CSV file, and it takes care of the complex tasks.
Key Features
Snapshot Creation: The tool automatically creates snapshots of the disks attached to a VM, ensuring that a consistent state of the disk is captured for migration.
Disk creation in target zone: It then creates disks in the target zone using the snapshots, abstracting away the complexity of this operation.
Create VM and attach data disks: New VMs are then created in the new zone, and all disks are attached to the new VM.
Logging: Throughout the process, the tool provides detailed logging, making it easy to monitor the migration and troubleshoot any issues that may arise.
How It Works
The script requires minimal setup. Users need to provide a CSV file with the following information. A sample CSV has also been shared in the repository.
Names of the source VMs
Source VM Resource Group
Operating system type
New Resource Group where new VMs need to be created
Target zone where VMs need to be created
Size of the new VM.
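Put together, a CSV for the tool might look like this. The column headers and values are illustrative only; the exact format is defined by the sample CSV in the repository:

```csv
vm_name,source_resource_group,os_type,target_resource_group,target_zone,vm_size
app-vm-01,rg-source,Linux,rg-target,2,Standard_D4s_v3
app-vm-02,rg-source,Windows,rg-target,3,Standard_D8s_v3
```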
Apart from the above, the following information also needs to be provided in the script.
Virtual Network Name
Subnet Name
Virtual Network
The tool then performs the following steps:
1. Capture Subnet ID: It starts by capturing the subnet ID that will be used in the VM creation process in the target zone.
2. Create Snapshot: The tool creates snapshots of the disks attached to the VM.
3. Create disks from the snapshots: The snapshots are then used to create disks in the target zone.
4. Create VM and attach disks: Finally, the new OS disk is used to create the new VM in the target zone, and the data disks are attached to the VM.
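The steps above map roughly onto Azure CLI calls. The Python sketch below only builds the command strings rather than executing them; the VM, disk, and resource group names are placeholders, and real code would also look up the subnet ID, handle data disks, and check for errors:

```python
# Sketch: the per-VM migration steps expressed as Azure CLI commands.
# Command strings only; a real tool would run these (e.g., via subprocess).
def migration_commands(vm, os_disk, source_rg, target_rg, zone):
    """Return the snapshot -> disk -> VM command sequence for one VM."""
    snapshot = f"{os_disk}-snap"
    new_disk = f"{os_disk}-z{zone}"
    return [
        f"az snapshot create -g {source_rg} -n {snapshot} --source {os_disk}",
        f"az disk create -g {target_rg} -n {new_disk} --source {snapshot} --zone {zone}",
        f"az vm create -g {target_rg} -n {vm}-z{zone} --attach-os-disk {new_disk}"
        f" --os-type Linux --zone {zone}",
    ]

for cmd in migration_commands("app-vm-01", "app-vm-01-osdisk", "rg-source", "rg-target", 2):
    print(cmd)
```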
Getting Started
To use the tool, you’ll need Python 3.6.x and the Azure CLI installed on your machine. Then, simply run the script with `python az_vm_migration_tool.py`, passing the name of a CSV file containing the migration parameters as an argument.
GitHub repo
Conclusion
The Azure VM Migration Tool represents a significant step forward in simplifying the process of migrating VMs within Azure. By automating the migration process, it not only saves time but also reduces the potential for errors, making it an invaluable resource for anyone managing Azure VMs.
Partner Blog | Building proficiency with Microsoft partner skilling
Microsoft partners understand that driving innovation and transformation requires current technical knowledge and expertise. They also understand how technology can help organizations enhance their competitive edge, maximize AI investments, encourage innovation, minimize risks, and handle data effectively.
At Microsoft, we are committed to providing our partner community with ongoing opportunities to enhance and expand their skills for building AI-enabled, customer-centric solutions across different modalities. Whether you are an engineer developing AI and cloud solutions, or you work directly with customers to sell these solutions, we have a course designed to improve your proficiency and overall knowledge, while also meeting Microsoft AI Cloud Partner Program skilling requirements.
In this blog, we will highlight some of the new skilling opportunities available to our partner community.
Continue reading here
SELECT query is slow in AlwaysOn primary node than secondary node
Hi Experts, Could you please help me !!!
We have one database hosted on SQL Server 2019 AlwaysOn with database compatibility level 100 (SQL 2008). We are running one SQL view SELECT statement on the primary node, and it takes around 9-10 seconds, whereas on the secondary node it takes 3-4 seconds. There are not many users or much load on the primary node database.
For testing, I removed the database from AlwaysOn and reran the same query, and it took 4 seconds on the primary node. Then I added the database back into AlwaysOn, and the query time on the primary node is again 9-10 seconds.
Could you please guide me on how to improve the query performance on the primary node when the database is in AlwaysOn?
Thanks
Sreenivasa
SharePoint Navigation Bar Moved to Side
Hello,
We have a communications page that normally has navigation across the top of the page, with options for a mega-menu/cascading. There was also a pane on the left side which allowed for audience targeting settings. I believe we may have inadvertently changed a setting which has relocated the navigation to the left side of the page and removed the audience targeting options.
The only options I am aware were changed were document sets and the content organizer. These were turned off but we still do not have the option to return the navigation bar to the top of the page.
Power Query Doubling some fields after creating key and merging queries.
I have two queries in Power Query that I need to join together. I have created a key so each state has its own number. When I merge my Crop Report query with my Harvested query, some states are doubled up, with different data coming from somewhere. My code below was filtered down to a single state that is showing the problem.
Dual-boot Linux/Windows no longer works: Latest update has caused Linux to stop working
Hi all, since the latest Insider update, Linux doesn’t seem to work on my dual-booted W11/SteamOS Steam Deck OLED. I am not sure what steps I can take on the software side, as Linux doesn’t want to boot whatsoever. This has only happened since the latest Windows security update was installed.
Any help would be greatly appreciated
TIA
Query OU – Get Groups – Get Group Members – Export to CSV
The AI result did not work I am getting an error. The error is “Get-ADGroupMember: The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.”
Import-Module ActiveDirectory
# Specify the OU where your groups are located
$SearchBase = "CN=test,DC=test,DC=LOC"
# Get all groups within the specified OU
$Groups = Get-ADGroup -Filter * -Properties * -SearchBase $SearchBase
# Initialize an empty array to store group members
$GroupMembers = @()
foreach ($Group in $Groups) {
    $Members = $Group | Get-ADGroupMember
    foreach ($Member in $Members) {
        $Info = New-Object psObject
        $Info | Add-Member -MemberType NoteProperty -Name "GroupName" -Value $Group.Name
        $Info | Add-Member -MemberType NoteProperty -Name "Description" -Value $Group.Description
        $Info | Add-Member -MemberType NoteProperty -Name "Member" -Value $Member.Name
        $GroupMembers += $Info
    }
}
# Export the results to a CSV file
$GroupMembers | Sort-Object GroupName | Export-CSV C:\temp\group_members.csv -NoTypeInformation
Partner Blog | Unlock the marketplace opportunity with certified software designations
Through the Microsoft AI Cloud Partner Program, we invest in key initiatives like Solutions Partner designations, which play a pivotal role in driving success in the marketplace for both partners and customers. This distinction helps partners showcase their validated skills, so customers can confidently choose the right partner for their needs.
When you become a Solutions Partner* with certified software**, customers can easily identify the quality, capability, reliability, and relevance of your software solution. That credibility is good for your business right now, but this designation also sets your organization up for long-term marketplace success by unlocking even more growth opportunities—like Azure IP co-sell top tier benefits and Azure partner-led offerings.
That is why we’re making it even easier for you to attain a certified software designation with a limited-time subsidy that covers 50% of the audit cost. The audit involves working with an independent auditing partner who will review submitted documentation to validate your solution’s technical interoperability and customer success.
Read on to learn more about certified software designations, their benefits, and how you can secure your subsidy to become a Solutions Partner with certified software.
Continue reading here
Microsoft Tech Community – Latest Blogs –Read More
Tackle Complex LLM Decision-Making with Language Agent Tree Search (LATS) & GPT-4o
Large Language Models (LLMs) have demonstrated exceptional abilities in performing natural language tasks that involve complex reasoning. As a result, these models have evolved to function as agents capable of planning, strategising, and solving complex problems. However, challenges persist when it comes to making decisions under uncertainty, where outcomes are not deterministic, or when adaptive decision-making is required in changing environments, especially in multi-step scenarios where each step influences the next. We need more advanced capabilities…
This is where GPT-4o’s advanced reasoning capabilities and Language Agent Tree Search (LATS) come together to address these challenges. LATS incorporates a dynamic, tree-based search methodology that enhances the reasoning capabilities of GPT-4o. By integrating Monte Carlo Tree Search (MCTS) with LLMs, LATS unifies reasoning, acting, and planning, creating a more deliberate and adaptive problem-solving framework. This powerful combination allows for improved decision-making and more robust handling of complex tasks, setting a new standard in the deployment of language models as autonomous agents.
Is search the missing piece in GenAI problem solving?
Computational problem solving can be broadly defined as “search through a combinatorial problem space”, represented as a tree. Depth-First Search (DFS) and Breadth-First Search (BFS) are fundamental methods for exploring such solution spaces. A notable example of the power of deep search is AlphaGo’s “Move 37,” which showcased how innovative, human-surpassing solutions can emerge from extensive exploration.
Unlike traditional methods that follow predefined paths, LLMs can dynamically generate new branches within the solution space by predicting potential outcomes, strategies, or actions based on context. This capability allows LLMs to not only navigate but also expand the problem space, making them exceptionally powerful in situations where the problem structure is not fully known, is continuously evolving, or is highly complex.
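As a rough sketch of this idea (the `propose_actions` stub stands in for a real LLM call; the names and node layout are illustrative assumptions, not from any particular framework), dynamically growing a solution tree might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One point in the solution space; children are generated on demand."""
    state: str
    children: List["Node"] = field(default_factory=list)

def propose_actions(state: str) -> List[str]:
    """Stand-in for an LLM call that proposes next actions for a state.

    A real implementation would prompt the model with `state` and parse
    the candidate actions from its reply.
    """
    return [f"{state} -> option {i}" for i in range(1, 4)]

def expand(node: Node) -> None:
    """Grow the tree by turning each proposed action into a child node."""
    node.children = [Node(s) for s in propose_actions(node.state)]

root = Node("allocate portfolio")
expand(root)
print([c.state for c in root.children])
```

The key difference from DFS/BFS is that the branching factor here is not fixed in advance: the model decides which branches exist at each state.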
Inference-time Reasoning with Meta Generation Algorithms (MGA)
Scaling compute during training is widely recognised for its ability to improve model performance, but the benefits of scaling compute during inference remain under-explored. MGAs offer a novel approach by amplifying computational resources during inference.
Unlike traditional token-level generation methods, meta-generation algorithms employ higher-order control structures such as planning, loops with multiple model calls, self-reflection, task decomposition, and dynamic conditioning. These mechanisms allow the model to execute tasks end-to-end, mimicking higher-level cognitive processes often referred to as System-2 thinking.
One way meta-generation algorithms may enhance LLM reasoning is by integrating search into the generation process. During inference, MGAs dynamically explore a broader solution space, allowing the model to reason through potential outcomes and adapt strategies in real time. By generating multiple paths and evaluating their viability, meta-generation algorithms enable LLMs to simulate deeper, more complex reasoning akin to traditional search methods. This approach not only expands the model’s ability to generate novel insights but also improves decision-making in scenarios with incomplete or evolving information.
Techniques like Tree of Thoughts (ToT), Graph of Thoughts (GoT), and Chain-of-Thought (CoT) are employed to navigate combinatorial solution spaces efficiently.
ToT (2*) enables hierarchical decision-making by structuring potential outcomes as tree branches, facilitating exploration of multiple paths.
GoT (6*) maps complex relationships between ideas, allowing the model to dynamically adjust and optimize its reasoning path.
CoT (5*) provides step-by-step reasoning that links sequential thoughts, improving the coherence and depth of the generation.
Why is LATS/MCTS better?
LATS relies on MCTS
In the Tree of Thoughts (ToT) approach, a tree structure is used to represent different decision paths. Traditional methods like Depth-First Search (DFS) or Breadth-First Search (BFS) can navigate this tree, but they are computationally expensive because they explore each possible path systematically.
Monte Carlo Tree Search (MCTS) is an improvement on this by simulating different outcomes for actions and updating the tree based on these simulations. It uses a “selection” process where it picks decision nodes using a strategy that balances exploration (trying new paths) and exploitation (choosing known good paths). This is guided by a formula called Upper Confidence Bound (UCB).
The UCB formula has two key parts:
Exploitation Term: This represents the estimated reward of choosing a node, calculated from the rewards observed in simulations.
Exploration Term: This decreases the more a certain path is visited, meaning that if a path is over-explored, the algorithm may shift to a less-explored path even if it seems less promising initially.
By selecting nodes using UCB, simulating outcomes with language models (LLMs), and backpropagating the rewards up the tree, MCTS effectively balances between exploring new strategies and exploiting known successful ones.
The second part of the UCB formula is the exploration term, which decreases as you explore deeper into a specific path. This decrease may lead the selection algorithm to switch to another path in the decision tree, even if that path has a lower immediate reward, because the exploration term remains higher when that path is less explored.
Node selection with UCB, reward calculations with LLM simulations and backpropagation are the essence of MCTS.
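A minimal sketch of that selection rule in Python (the exploration constant `c` and the toy reward numbers are illustrative assumptions, not values from this article):

```python
import math

def ucb1(total_reward: float, visits: int, parent_visits: int, c: float = 1.4) -> float:
    """UCB1 score: exploitation (average reward) plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are always selected first
    exploitation = total_reward / visits                      # average reward so far
    exploration = c * math.sqrt(math.log(parent_visits) / visits)  # shrinks with visits
    return exploitation + exploration

# A heavily visited node with a higher average reward vs. a barely visited
# one: the exploration bonus can still steer selection to the fresh node.
scores = {
    "well_explored": ucb1(total_reward=150, visits=30, parent_visits=32),
    "barely_explored": ucb1(total_reward=8, visits=2, parent_visits=32),
}
picked = max(scores, key=scores.get)
print(picked)
```

Here `well_explored` averages 5 per visit and `barely_explored` only 4, yet the latter wins selection because its exploration term is still large, which is exactly the balance described above.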
An Implementation — Financial Decision Making…
Let’s say we want to use LLMs for investment management. We feed the LLM the macro landscape, and it gives us three investment strategy options…
Iteration 1:
1. Selection: We start at the root; since this is the first iteration, we select all initial strategies (A, B, and C) and simulate their outcomes.
2. Simulation & Backpropagation: Next, the LLM “simulates” each strategy based on the context it has and assigns the following “rewards” (investment returns) to each node:
Strategy A: $5,000
Strategy B: $7,000
Strategy C: $4,000
3. Expansion: Based on the selection, Strategy B has the highest UCB1 value (since all nodes are at the same depth, this reduces to the highest simulated reward), so we expand only Strategy B by simulating its child nodes.
First Expansion of the tree
Iteration 2:
1. Selection: Since strategies B1 and B2 have not yet been simulated, their UCB scores are tied, so both nodes will be simulated.
2. Simulate Both Nodes:
Simulate B1: LLM predicts a return of $8,500 for B1.
Simulate B2: LLM predicts a return of $7,500 for B2.
Expand node B
3. Backpropagation:
After each simulation, the results are backpropagated up the tree, updating the values of the parent nodes. This step ensures that the impact of the new information is reflected throughout the tree.
Updating Strategy B’s Value: Strategy B now needs to reflect the outcomes of B1 and B2. One common approach is to average the rewards of B1 and B2 to update Strategy B’s value. Now, Strategy B has an updated value of $8,000 based on the outcomes of its child nodes.
4. Recalculate UCB Scores:
After backpropagation, the UCB scores for all nodes in the tree are recalculated. This recalculation uses the updated values (average rewards) and visit counts, ensuring that each node’s UCB1 score accurately reflects both its potential reward and how much it has been explored.
UCB(s) = average reward of s (exploitation term) + c · sqrt(ln N / n(s)) (exploration term), where N is the parent’s visit count and n(s) is the visit count of node s.
5. Next Selection & simulation:
B1 is selected for further expansion (as it has a higher reward) into child nodes:
B1a: “Invest in AI companies”
B1b: “Invest in green tech”
6. Backpropagation
7. UCB1 Calculation
Next, the UCB values of all nodes are recalculated. Assume that, due to the decaying exploration factor, B2 now has a higher UCB1 score than both B1a and B1b. This could occur if B1 has been extensively explored, reducing the exploration term for its children. Instead of continuing to expand B1’s children, the algorithm shifts back to explore B2, which has become more attractive due to its unexplored potential.
This example illustrates how MCTS can dynamically adjust its search path based on new information, ensuring that the algorithm remains efficient and focused on the most promising strategies as it progresses.
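The two iterations above can be replayed deterministically as a sanity check. The rewards below are hard-coded stand-ins for the LLM simulations, and the parent update averages the child rewards, as in the walkthrough (rather than the classical running-mean update):

```python
class Node:
    """A decision node holding a name, a reward estimate, and children."""
    def __init__(self, name: str, reward: float = 0.0):
        self.name = name
        self.reward = reward
        self.children = []

# Iteration 1: simulate the three root strategies (rewards from the example).
root = Node("macro context")
root.children = [Node("A", 5000), Node("B", 7000), Node("C", 4000)]

# Selection: all nodes are at the same depth, so pick the highest reward.
best = max(root.children, key=lambda n: n.reward)

# Iteration 2, expansion + simulation: grow the chosen strategy's children.
best.children = [Node("B1", 8500), Node("B2", 7500)]

# Backpropagation: the parent's value becomes the average of its children.
best.reward = sum(c.reward for c in best.children) / len(best.children)
print(best.name, best.reward)
```

Running this reproduces the walkthrough: Strategy B is selected in iteration 1, and after backpropagation its value is $8,000.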
An Implementation with Azure OpenAI GPT4-O
Ok, next we will build a “financial advisor” using the Azure OpenAI GPT-4o model, implementing LATS. (Refer to the GitHub repo here.)
(For an accurate analysis, I am using the IMF World Economic Outlook report from July 2024 as my LLM context for simulations, i.e. for generating child nodes and for assigning rewards to decision nodes.)
The code leverages the graphviz library to visually represent the decision tree generated during the execution of the investment strategy simulations. Below are snippets from parts of the resultant graph generated by a sample iteration.
Here is how the code runs – code execution video…
The decision tree is too wide to fit into a single picture, so I have added snippets of how the tree looks below. You can find a sample decision tree in the GitHub repo here…
Sample run of the MCTS code to find the best investment strategy in current macroeconomic climate
Below is the optimal strategy inferred by LATS…
Optimal Strategy Summary: The optimal investment strategy is structured around several key steps influenced by the IMF report. Here’s a concise summary of each step and its significance:
1. **Diversification Across Geographies and Sectors:**
- **Geographic Diversification:** This involves spreading investments across regions to mitigate risk and tap into different growth potentials. Advanced economies like the U.S. remain essential due to their robust consumer spending and resilient labor market, but the portfolio should include cautious weighting to manage risks. Simultaneously, emerging markets in Asia, such as India and Vietnam, are highlighted for their higher growth potential, providing opportunities for higher returns.
- **Sector Diversification:** Incorporating investments in sectors like green energy and sustainability reflects the growing global emphasis on renewable energy and environmentally friendly technologies. This also aligns with regulatory changes and consumer preferences, creating future growth opportunities.
2. **Green Energy and Sustainability:**
- Investing in green energy demonstrates foresight into the global shift toward reducing carbon footprints and reliance on fossil fuels. This is significant due to increased governmental support, such as subsidies and policy incentives, which are likely to propel growth within this sector.
3. **Fintech and E-Commerce:**
- Allocating capital towards fintech and e-commerce companies capitalizes on the digital transformation accelerated by the global shift towards digital platforms. This sector is expected to grow due to increased adoption of online services and digital payment systems, thus presenting promising investment opportunities.
Conclusion:
By integrating LATS, we harness the reasoning capabilities of LLMs to simulate and evaluate potential strategies dynamically. This combination allows for the construction of decision trees that not only represent the logical progression of decisions but also adapt to changing contexts and insights, provided by the LLM through simulations and reflections.
References:
Language Agent Tree Search: Unifying Reasoning, Acting, and Planning in Language Models, by Zhou et al.
Tree of Thoughts: Deliberate Problem Solving with Large Language Models, by Yao et al.
The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey, by Tula Masterman, Mason Sawtell, Sandi Besen, and Alex Chao
From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models, by Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, and Zaid Harchaoui
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, by Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou
Graph of Thoughts: Solving Elaborate Problems with Large Language Models, by Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michał Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler
Hope you enjoyed the content. Let me know any comments, and please share the content if you found it useful.
Subscribe to my newsletter on LinkedIn to keep up to date in the GenAI space… Use this link…