Tag Archives: microsoft
Swapping Places of Navigation and Context Bars in Windows 11 File Explorer
Hello, following a recent update to Windows 11, I noticed a significant change in the File Explorer layout. The navigation bar (which includes back, forward buttons, etc.) and the context bar (featuring functions like New, Cut, Copy, etc.) have exchanged positions. This change has disrupted the muscle memory I had built up for navigating folders. Attached is a screenshot for reference.
Would anyone happen to know a solution to revert to the previous layout configuration?
PowerShell Inquiry
Hello, I need assistance in adjusting the retention period for virus notifications. I am looking to have the virus notifications saved for a maximum of 1 day due to a few specific reasons.
I tried running the following PowerShell command as an administrator to achieve this:
```
Set-MpPreference -QuarantinePurgeItemsAfterDelay 1
```
However, I encountered an error message:
```
Set-MpPreference : Operation failed with the following error: 0x%1!x!
At line:1 char:2
+ Set-MpPreference -QuarantinePurgeItemsAfterDelay 1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (MSFT_MpPreference:rootMicrosoft…FT_MpPreference) [Set-MpPreference],
CimException
+ FullyQualifiedErrorId : HRESULT 0xc0000142,Set-MpPreference
```
I am seeking guidance on how to make this adjustment successfully.
Operating System Version: 23H2 (OS Build 22631.2792)
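For anyone hitting the same error, a quick first check is to confirm the current value and retry the change from an elevated PowerShell session. This is a minimal sketch, not a guaranteed fix: if the setting is managed by MDM/Group Policy or Tamper Protection, Set-MpPreference can still fail with the same error.

```powershell
# Inspect the current quarantine retention (in days) before changing it
Get-MpPreference | Select-Object QuarantinePurgeItemsAfterDelay

# Retry the change from an elevated session; if policy or Tamper Protection
# manages this setting, the call may still fail with the same error
Set-MpPreference -QuarantinePurgeItemsAfterDelay 1
```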
Indexer Failing to Honor User Activity Levels for Backoff, Utilizing Full Resources Disregarding Set
Greetings to all!
I am in need of some assistance as I have discovered that my indexer backoff feature has not been functioning as expected. Despite my efforts to enable it through various methods such as adjusting the DisableIndexerBackoff setting in regedit, disabling the Disable Indexer Backoff setting in Group Policy, and restarting the Windows Search service, the indexer is still consuming a significant amount of my PC’s resources at maximum speed. Previously, the indexer used to display a message stating “Indexing speed is reduced because of user activity,” indicating normal functioning, but it no longer does so.
It is essential to mention that I have not made any changes that could have caused this issue. After a few Windows 11 updates and having the indexer backoff disabled for an extended period, when I attempted to enable it, it failed to work properly. This was not the case nearly half a year ago when everything was functioning correctly. I have extensively researched similar issues but have not found a satisfactory solution.
I am seeking advice on how to resolve the problem of the indexer backoff not reducing the indexing speed despite being enabled. Any insights or expertise from those who have encountered a similar issue would be greatly appreciated.
Thank you in advance for your assistance!
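For reference, this is roughly how the policy value behind the "Disable Indexer Backoff" Group Policy can be inspected and reset from PowerShell. This is a sketch; the key path and value name are assumptions based on the policy described above, so verify them on your system before relying on it.

```powershell
# Policy key used by the "Disable Indexer Backoff" Group Policy (assumed path/value name)
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Windows Search'

# Show the current value, if present (1 = backoff disabled, 0/absent = backoff enabled)
Get-ItemProperty -Path $key -Name 'DisableBackoff' -ErrorAction SilentlyContinue

# Ensure backoff is not disabled, then restart the search service to apply the change
Set-ItemProperty -Path $key -Name 'DisableBackoff' -Value 0
Restart-Service -Name WSearch
```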
Question about PowerShell
Hello, I need to adjust the retention period for virus notifications. I would like to temporarily store virus notifications for a maximum of 1 day for various reasons.
I attempted to modify this setting using PowerShell with administrative privileges by running the following command:
> Set-MpPreference -QuarantinePurgeItemsAfterDelay 1
However, I encountered an error:
Set-MpPreference : The operation failed with the following error: 0x%1!x!
At line:1 char:2
+ Set-MpPreference -QuarantinePurgeItemsAfterDelay 1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (MSFT_MpPreference:rootMicrosoft…FT_MpPreference) [Set-MpPreference],
CimException
+ FullyQualifiedErrorId : HRESULT 0xc0000142,Set-MpPreference
Can someone assist me in making this adjustment?
Operating System Version: 23H2 (OS Build 22631.2792)
Teams and Area permissions
In a project I have 3 teams, and each team has one Area.
I want members of a team to be unable to create User Stories in the other teams' areas.
I have TEAM1 with AREA1
I have TEAM2 with AREA2
I have TEAM3 with AREA3
In Project Configuration :
For AREA1 I Deny access to TEAM2 and TEAM3
For AREA2 I Deny access to TEAM1 and TEAM3
For AREA3 I Deny access to TEAM1 and TEAM2
It works very well if a member is only in one Team.
But it doesn't work if a member is in 2 teams: in that case the member can't access any User Story.
Can somebody help me?
All the mail from one email address arrives in quarantine with SCL = 5
All the emails sent to us by our customer (email address removed for privacy reasons) arrive in our quarantine with an SCL score of 5.
However, the email address passes the DMARC tests perfectly (test carried out with https://www.dmarctester.com/).
The domain is not blacklisted, and emails from his colleagues (email addresses removed for privacy reasons) arrive with no problem.
The content of the email shouldn’t be the problem either, as an empty email is also quarantined.
What additional diagnostic work can I do to understand why the SCL for each of his emails scores 5?
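One place to start is a message trace from Exchange Online PowerShell, which surfaces the filtering events for each message; the X-Forefront-Antispam-Report header on a quarantined copy is also worth reviewing. A minimal sketch, assuming the ExchangeOnlineManagement module and sufficient admin rights; the sender address is a placeholder:

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module)
Connect-ExchangeOnline

# Trace recent messages from the affected sender and inspect the per-message events
Get-MessageTrace -SenderAddress 'sender@customer.example' `
                 -StartDate (Get-Date).AddDays(-2) -EndDate (Get-Date) |
    Get-MessageTraceDetail |
    Select-Object MessageTraceId, Date, Event, Detail
```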
Microsoft Sentinel & Cyberint Threat Intel Integration Guide
Microsoft Sentinel & Cyberint IOC Module Integration Guide
In today’s cybersecurity landscape, threat intelligence plays a critical role in identifying and mitigating potential threats. Microsoft Sentinel, a powerful cloud-native SIEM (Security Information and Event Management) solution, provides robust capabilities for security monitoring and incident response.
Integrating Microsoft Sentinel with Cyberint (Cyberint – Threat Intelligence & Digital Risk Protection) module enhances its ability to detect and respond to emerging threats using threat intelligence feeds.
This guide outlines the steps to integrate Cyberint’s module with Microsoft Sentinel, enabling you to leverage enriched threat intelligence data for more effective security operations.
PREREQUISITES
1. Ensure you have an active Azure account with sufficient permissions to create resources
2. An active Cyberint account (to obtain the API token and URL).
This blog will guide you through the steps for integrating with Cyberint TI feeds and how to troubleshoot various issues that may arise during integration. Here is a brief summary of the steps needed:
Log in to your Azure account.
Create a new Logic App
Ensure that Managed Identity for the Logic app is enabled.
Switch to Code view and paste in the JSON code
Use JSON Lint to verify and validate the Json Format.
Save the Logic App code.
Add a Switch-Case to handle HTTP action redirect status code 307.
Add steps for delay action to handle the Status code 429.
Configure the Logic App to execute daily.
Add Retry Policy if Status code 429 persists.
Grant Microsoft Sentinel Contributor Role to Logic App at the Resource Group Level.
Create a Blank logic app
1. Sign In to Azure Portal
Go to: Azure Portal
Log in with your Azure credentials.
2. Create a new Logic App
Navigate to: All services > Logic Apps
Click: + Add or + Create
Configure Basics:
Subscription: Select your Azure subscription.
Resource Group: Choose or create a new one.
Logic App Name: Enter a unique name.
Region: Choose your preferred region.
Select Type: Choose Logic App (Consumption) for pay-as-you-go pricing.
Click: Review + Create, then Create.
3. Enable the Logic App's managed identity
Under the "Settings" section in the navigation bar, select "Identity".
Switch the "Status" slider to "On" and confirm that you wish to perform this action.
You will add the role assignment later in this blog post.
4. Switch to Code View to paste in JSON code
After activating the managed Identity, proceed to the Code view within Logic app.
Under the “Development Tools” section in the navigation bar, select “Logic app code view”
Insert the following code, making sure to substitute the placeholder values (shown in angle brackets) with the relevant information specific to your environment.
The information you will need to gather is:
Microsoft Sentinel Subscription ID
Microsoft Sentinel Resource Group Name
Microsoft Sentinel Deployment Region
Cyberint API Token
Cyberint Environment URL
Utilize the following code provided by Cyberint to implement the foundational logic structure. Substitute the placeholder values in angle brackets with the appropriate values.
———————————————————————————————————–
———————————————————————————————————–
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "actions": {
      "Compose": {
        "inputs": "@split(variables('input'), '\n')",
        "runAfter": {
          "Initialize_variable": [
            "Succeeded"
          ]
        },
        "type": "Compose"
      },
      "Filter_array": {
        "inputs": {
          "from": "@outputs('Compose')",
          "where": "@not(equals(item(), ''))"
        },
        "runAfter": {
          "Compose": [
            "Succeeded"
          ]
        },
        "type": "Query"
      },
      "Follow_redirect_http": {
        "inputs": {
          "method": "GET",
          "uri": "@{outputs('HTTP')['headers']['location']}"
        },
        "runAfter": {
          "HTTP": [
            "Failed"
          ]
        },
        "type": "Http"
      },
      "For_each": {
        "actions": {
          "Parse_JSON_2": {
            "inputs": {
              "content": "@items('For_each')",
              "schema": {
                "properties": {
                  "confidence": {
                    "type": "integer"
                  },
                  "description": {
                    "type": "string"
                  },
                  "detected_activity": {
                    "type": "string"
                  },
                  "ioc_type": {
                    "type": "string"
                  },
                  "ioc_value": {
                    "type": "string"
                  },
                  "observation_date": {
                    "type": "string"
                  },
                  "severity_score": {
                    "type": "integer"
                  }
                },
                "type": "object"
              }
            },
            "runAfter": {},
            "type": "ParseJson"
          },
          "Threat_Intelligence_-_Upload_Indicators_of_Compromise_(V2)_(Preview)": {
            "inputs": {
              "body": {
                "indicators": [
                  {
                    "confidence": "@{body('Parse_JSON_2')?['confidence']}",
                    "created": "@{utcNow()}",
                    "description": "@{body('Parse_JSON_2')?['description']}",
                    "external_references": [],
                    "granular_markings": [],
                    "id": "indicator--@{guid()}",
                    "indicator_types": [
                      "@{body('Parse_JSON_2')?['detected_activity']}"
                    ],
                    "kill_chain_phases": [
                      {
                        "kill_chain_name": "mandiant-attack-lifecycle-model",
                        "phase_name": "establish-foothold"
                      }
                    ],
                    "labels": [
                      "cyberint"
                    ],
                    "lang": "",
                    "modified": "@{utcNow()}",
                    "name": "@{body('Parse_JSON_2')?['ioc_value']}",
                    "object_marking_refs": [],
                    "pattern": "[ipv4-addr:value = '@{body('Parse_JSON_2')?['ioc_value']}']",
                    "pattern_type": "ipv4-addr",
                    "spec_version": "2.1",
                    "type": "indicator",
                    "valid_from": "@{body('Parse_JSON_2')?['observation_date']}"
                  }
                ],
                "sourcesystem": "Cyberint"
              },
              "host": {
                "connection": {
                  "name": "@parameters('$connections')['azuresentinel']['connectionId']"
                }
              },
              "method": "post",
              "path": "/V2/ThreatIntelligence/@{encodeURIComponent('<Microsoft Sentinel workspaceid>')}/UploadIndicators/"
            },
            "runAfter": {
              "Parse_JSON_2": [
                "Succeeded"
              ]
            },
            "type": "ApiConnection"
          }
        },
        "foreach": "@body('Filter_array')",
        "runAfter": {
          "Filter_array": [
            "Succeeded"
          ]
        },
        "type": "Foreach"
      },
      "HTTP": {
        "inputs": {
          "cookie": "access_token=<cyberint api token>",
          "method": "GET",
          "queries": {
            "date": "@{formatDateTime(utcNow(), 'yyyy-MM-dd')}",
            "detected_activity": "cnc_server",
            "ioc_type": "ipv4"
          },
          "uri": "https://<cyberint environment url>/ioc/api/v1/feed/daily"
        },
        "runAfter": {},
        "type": "Http"
      },
      "Initialize_variable": {
        "inputs": {
          "variables": [
            {
              "name": "input",
              "type": "string",
              "value": "@{body('Follow_redirect_http')}"
            }
          ]
        },
        "runAfter": {
          "Follow_redirect_http": [
            "Succeeded"
          ]
        },
        "type": "InitializeVariable"
      }
    },
    "contentVersion": "1.0.0.0",
    "outputs": {},
    "parameters": {
      "$connections": {
        "defaultValue": {},
        "type": "Object"
      }
    },
    "triggers": {
      "Recurrence": {
        "evaluatedRecurrence": {
          "frequency": "Week",
          "interval": 1
        },
        "recurrence": {
          "frequency": "Week",
          "interval": 1
        },
        "type": "Recurrence"
      }
    }
  },
  "parameters": {
    "$connections": {
      "value": {
        "azuresentinel": {
          "connectionId": "/subscriptions/<azure subscriptionid>/resourceGroups/<Sentinel Resource Group Name>/providers/Microsoft.Web/connections/azuresentinel",
          "connectionName": "azuresentinel",
          "id": "/subscriptions/<azure subscriptionid>/providers/Microsoft.Web/locations/<deployment Region>/managedApis/azuresentinel"
        }
      }
    }
  }
}
———————————————————————————————————————————————————————————————————————-
5. Utilize Json Lint Validator
Since you have modified the JSON code, it makes sense to double check it. In a new tab or window in your browser, go to JSON Online Validator and Formatter – JSON Lint, paste in your modified code, and then click on the green “Validate JSON” button.
Fix any errors that show up and repeat the process until the JSON passes. If you made any changes, copy the corrected code back into the Logic App.
6. Save the Logic App code
In the Logic App code view page, click on the “Save” button. The Azure portal notifications bell will show that this activity is running. You can click on that to see if any errors have occurred.
7. Implement the Switch Case Action
An additional Switch-Case action is required (to handle the HTTP action redirect) once the above code is deployed. Follow the instructions below to update the Logic App.
In the “Development Tools” in the navigation menu, select “Logic App designer” to switch back to the graphical view. Note: You can also get to this view by clicking on the “Edit” button in the “Overview” page.
The Switch action is to be added after the HTTP action:
Use the following steps to add the needed actions:
1. Click Add an action.
2. Search for the "Switch" action and select it. Set the Switch expression to the status code returned by the previous HTTP step.
Make sure your Switch action has the "Run After" options 'Has Failed' and 'Is Successful' checked under the "Settings" tab.
3. Click the Add Case button and set the case value to the exact status code (307).
4. Inside that case, add a new HTTP action: search for the "HTTP" action and select it.
5. In this HTTP 2 action, use the expression '@{outputs('HTTP')['headers']['location']}' as the URI so the redirect location returned by the previous step is fetched, and make sure the method is GET (see the code-view sketch below).
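For reference, the resulting Switch case looks roughly like the following in code view. This is a sketch rather than the exact code the designer generates: the action names (Switch_on_status_code, HTTP_2, Case_307) and the runAfter wiring will reflect whatever you built in the designer.

```json
"Switch_on_status_code": {
    "type": "Switch",
    "expression": "@outputs('HTTP')['statusCode']",
    "cases": {
        "Case_307": {
            "case": 307,
            "actions": {
                "HTTP_2": {
                    "type": "Http",
                    "inputs": {
                        "method": "GET",
                        "uri": "@{outputs('HTTP')['headers']['location']}"
                    }
                }
            }
        }
    },
    "default": { "actions": {} },
    "runAfter": {
        "HTTP": [ "Succeeded", "Failed" ]
    }
}
```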
8. Add Additional Delay action
There may be cases where the upload step receives a status code of 429 (too many requests). To mitigate this, add a Delay action inside the For each loop, directly after the Parse JSON 2 action (a code-view sketch follows these steps):
Click the Add Action button that is directly under the “Parse JSON 2” action.
Search for “Delay” and select it
Set its “Count” to 5 and change the “Unit” to “Second”
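In code view, the Delay shows up as a Wait action inside the For each loop; a sketch, where the action name and the runAfter entry are illustrative:

```json
"Delay": {
    "type": "Wait",
    "inputs": {
        "interval": {
            "count": 5,
            "unit": "Second"
        }
    },
    "runAfter": {
        "Parse_JSON_2": [ "Succeeded" ]
    }
}
```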
More information on the status code 429 can be found at the Official Microsoft Reference links:
1. Microsoft Sentinel – Connectors | Microsoft Learn
2. https://learn.microsoft.com/en-us/azure/logic-apps/handle-throttling-problems-429-errors?tabs=consumption
9. Adjust the recurrence of the Logic App
This Logic App should run daily because Cyberint produces threat intelligence feeds every day; this is a recommended practice compared to the default weekly schedule. Optionally, a specific time of day can be selected for the Logic App to execute.
Select the "Recurrence" trigger at the beginning of the Logic App.
Change the "Interval" to "1" and the "Frequency" to "Day".
If you wish to have this Logic App run at a specific time, use the "At These Hours" and "At These Minutes" fields to specify when you want it to run (see the code-view sketch below).
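In code view, the updated trigger ends up roughly like this; the 6:00 start time is only an example, and the "schedule" block can be omitted if you do not need a fixed start time:

```json
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Day",
            "interval": 1,
            "schedule": {
                "hours": [ 6 ],
                "minutes": [ 0 ]
            }
        }
    }
}
```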
10. Adding a Retry Policy if the 429 status code persists
If the Logic App still fails with a 429 error, add a retry policy.
Follow the steps to add a retry policy:
1. Navigate to Logic app Designer.
2. Open the Threat Intelligence – Upload Indicators of Compromise step in the Logic App.
3. Open its Settings tab.
4. Under Networking, select Retry Policy and choose Fixed Interval.
5. Provide the count and interval as required (this Logic App currently uses a count of 4 with a 20-second interval); a code-view sketch follows.
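In code view, the same retry policy appears inside the upload action's "inputs"; a sketch using the count and interval mentioned above (the interval is an ISO 8601 duration, so 20 seconds is PT20S):

```json
"inputs": {
    "retryPolicy": {
        "type": "fixed",
        "count": 4,
        "interval": "PT20S"
    }
}
```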
11. Grant Microsoft Sentinel Contributor Role to Logic App at the Resource Group Level
To avoid an Unauthorized error at the upload step, the Logic App's managed identity needs the Microsoft Sentinel Contributor role. Use the following steps to grant it:
Log in to the Azure portal (portal.azure.com).
Go to Microsoft Sentinel's resource group.
Navigate to “Access Control (IAM)”
4. Click on the “Add” button and select “Add role assignment”
5. Select “Microsoft Sentinel Contributor” role and then click the “Next” button at the bottom of the screen
6. Select the “Managed Identity” radio button
7. Click “Select members”
8. Select the correct Subscription
9. In the “Managed Identity” drop down, select “Logic app”
10. Find the name of the Logic App and select it.
11. Click the “Select” button at the bottom of the page.
12. Click the “Review and assign” button at the bottom of the page to assign the permission
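If you prefer to script this assignment, the equivalent Az PowerShell call looks roughly like the sketch below; the object ID placeholder is the Logic App's system-assigned managed identity, visible on its Identity blade, and the resource group placeholder matches the one used earlier.

```powershell
# Grant the Logic App's managed identity the Microsoft Sentinel Contributor role
# at the resource group scope (replace the placeholders with real values)
New-AzRoleAssignment -ObjectId '<logic-app-managed-identity-object-id>' `
    -RoleDefinitionName 'Microsoft Sentinel Contributor' `
    -ResourceGroupName '<Sentinel Resource Group Name>'
```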
The Logic App is now ready to be run daily to ingest the Cyberint Threat Intelligence data.
To verify that the data is being ingested, you can run the KQL query below.
ThreatIntelligenceIndicator
| where SourceSystem contains “Cyberint”
Learn about AppJetty’s ISV Success for Business Applications solution in Microsoft AppSource
Microsoft ISV Success for Business Applications offers platforms, resources, and support designed to help partners develop, publish, and market business apps. Learn more about this offer from AppJetty:
MappyField 365: MappyField 365 is a powerful geo-mapping plugin for Microsoft Dynamics 365 that boosts business productivity with advanced features like live tracking, geographic data visualization, proximity search, auto-scheduling, auto check-ins, territory management, and heat maps. Accelerate your business across organizations with location intelligence from AppJetty.
Custom views in new Outlook client gone
It appears that in the new Outlook client you cannot add custom views anymore, as explained in this article: https://support.microsoft.com/en-us/office/create-change-or-customize-a-view-f693f3d9-0037-4fa0-9376-3a57b6337b71 – is there another option in the new Outlook for this?
I have quite a few users now reporting this and I can’t find a MSFT documented way to get something back in place for our users.
Function disappears after Import-Module
I have a problem with Import-Module.
I have a psm1 file which exports the function "Write-FromWithinPsmFile". This psm1 is imported into two ps1 files using Import-Module.
Also, the main ps1 file calls the other ps1 file.
I found two cases: in one, calling the function "Write-FromWithinPsmFile" works fine; the other shows a confusing result that I cannot explain. The function "disappears" after the main script executes the second script, but only when it executes the second script from within a function.
This is the Test.psm1 file:
function Write-FromWithinPsmFile($Text)
{
    Write-Host "code in Test.psm1: $Text"
}
Export-ModuleMember -Function Write-FromWithinPsmFile
This is the script Start.ps1:
param (
[Parameter()]
[switch]
$WillFail
)
Import-Module "C:\tmp\Test.psm1" -Scope Local -Force -ErrorAction Stop
function Invoke-ScriptFromFunction
{
    . "C:\tmp\CalledScript_repro.ps1"
}
Write-FromWithinPsmFile "called from Start.ps1 - first time"
if ($WillFail)
{
    # execute the script from within a function
    Invoke-ScriptFromFunction
}
else
{
    # execute the script directly
    . "C:\tmp\CalledScript_repro.ps1"
}
Write-FromWithinPsmFile "called from Start.ps1 - second time"
And this is the CalledScript_repro.ps1, which is called by the main script:
Import-Module "C:\tmp\Test.psm1" -Scope Local -ErrorAction Stop -Force
# do something …..
This is the successful execution: powershell -File C:\tmp\Start.ps1 -Verbose
This is the failing one: powershell -File C:\tmp\Start.ps1 -WillFail -Verbose
It tells me:
Write-FromWithinPsmFile : The term ‘Write-FromWithinPsmFile’ is not recognized as the name of a
cmdlet, function, script file, or operable program.
Both calls succeed if I change CalledScript_repro.ps1 to this (I removed the -Force):
Import-Module "C:\tmp\Test.psm1" -Scope Local -ErrorAction Stop
# do something …..
Some time ago I added the -Force, because it is more convenient when debugging and writing the code.
So my question is:
What am I doing wrong? Do I need to do the Remove-Module explicitly? How should I use the Import-Module?
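For what it's worth, one workaround sketch (not necessarily the root-cause fix) is to make the nested import conditional, so the forced local re-import never tears down the module copy that the caller's scope is still bound to:

```powershell
# CalledScript_repro.ps1 (sketch): only re-import when the module isn't loaded yet,
# so a forced local re-import can't remove the function from the caller's scope
if (-not (Get-Module -Name Test)) {
    Import-Module 'C:\tmp\Test.psm1' -Scope Local -Force -ErrorAction Stop
}
# do something .....
```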
need help transforming data
Hi,
I basically have the left table and want to create the right one, but with a larger data set. Does anybody know how to do this in Excel? Does this process have a name?
Microsoft Intune agent on Linux
hi team
We enrolled a Linux device successfully with Intune.
The Linux OS is Ubuntu 22.04.4 LTS.
The Intune agent is version 1.2405.17.
Every time after closing the agent or restarting the OS, the Intune agent does not connect automatically and asks for a login.
Is this the normal behavior on Linux, or am I missing something? 🙂
Outlook deeplink calendar compose ends with a blank screen on save
Recently we noticed any deeplinks we use for composing events into the outlook calendar end with a blank screen. This used to close automatically on complete.
We have an offering of our app in Microsoft Teams, so this flow is very confusing for users, as it's not apparent that a new tab has been opened in Teams.
We are working towards using teams specific deeplinks for calendar events but we still need the outlook deeplink to work as expected for web versions of our app.
An example of this behaviour from another thread this was mentioned in that has had no response:
https://www.youtube.com/watch?v=TqEkn5SGQPU
Cross-workspace incident management
Hello Tech Community,
We are looking for a solution to manage incidents across several Sentinel workspaces within the same tenant.
1. We reviewed Azure Lighthouse, and it seems to work only for cross-tenant management.
2. We saw the option to mark the workspaces we want to monitor and click on "View incidents".
3. We also considered building the dashboard in a Workbook.
Could you please say if there is any other option to have a unified dashboard for managing incidents from several Sentinel workspaces within the same tenant?
How do I change a chart legend’s marker size without affecting the main marker
How can I change the marker size in the legend of an Excel chart without affecting the main marker in the plot area? Sometimes it is necessary to have small plot markers which affect the legend’s markers size, which makes the legend unclear, as seen in the attached image.
Thanks.
Question: confidential calls
Hello!
Can someone please let me know if it is possible to disable Co-pilot for confidential calls?
Thank you!
SSRS – Pagination of two tables in a single page
Question :
We are facing an issue achieving pagination of data from two tables when both tables are placed on a single page. The requirement is to show 12 rows from each table in its respective table on each page of the SSRS report.
Current result: the pagination of the second table does not appear on the same page where the first table's pagination is rendered. The second table's pagination starts only after pagination of the first table's data is complete. Overall, the pagination happens sequentially, but we need it to happen in parallel for both tables on the same page.
Any help is much appreciated in advance.
Expected : Pagination of first and second tables should be rendered in the same page across all the pages rendered as part of the pagination.
The .Net response code changed from a 201 to a 200 on all .Net framework versions from 4.8 and below
The .NET response code changed from a 201 to a 200 on all .NET Framework versions 4.8 and below, which is very weird. When I upgrade to .NET Core or a higher version, the status code returned is a 201. Can someone please help me solve this mystery?
Introducing Semantic Workbench: Your Gateway to Agentic AI Development
Introducing Semantic Workbench: Your Gateway to Agentic AI Development
In the rapidly evolving landscape of artificial intelligence (AI), the ability to quickly prototype and integrate intelligent assistants is becoming increasingly crucial. Whether you’re developing a new agent from scratch or integrating an existing one, having a versatile and user-friendly tool can make all the difference. Enter Semantic Workbench — a powerful tool designed to streamline the creation and management of intelligent agents.
The Semantic Workbench comes from our own efforts inside Microsoft to explore these ideas; it is a platform in active use. We decided to make it available because multiple teams inside Microsoft are finding value in it, and we believe the broader community will as well. Separating it out as an independent project will enable teams that want to use it to add capabilities for their scenarios, which will benefit everyone using it.
What is Semantic Workbench?
Semantic Workbench is a versatile tool designed to help you prototype intelligent agents quickly and efficiently. It supports the creation of new assistants or the integration of existing ones, all within a cohesive and intuitive interface. The workbench provides a user-friendly UI for creating conversations with one or more assistants, configuring settings, and exposing various behaviors.
Semantic Workbench is composed of three main components, each playing a crucial role in its functionality:
Workbench Service: The backend service that handles core functionalities, such as agents’ interaction, conversations, file uploads, authentication, and more.
Workbench App: The frontend web user interface that allows users to interact with the workbench and assistants seamlessly.
Agent Services: Any number of agent services that implement the workbench service protocol API, developed using any framework and programming language of your choice.
Why Use Semantic Workbench?
Simplified AI Agents Development
Writing a new agent and multi-agent solutions with Semantic Workbench is very simple. Developers can focus on the most important aspects, such as messaging, handling events, and executing commands. The integration with the workbench is a non-intrusive thin layer that can be removed when agents are ready for production, ensuring a smooth transition from development to deployment.
Versatility and Flexibility
Designed to be agnostic of any agent framework, language, or platform, Semantic Workbench facilitates experimentation, development, testing, and measurement of agent behaviors and workflows. This flexibility allows developers to integrate agents via a RESTful API, making it broadly applicable in various development environments. Agents can be written in any programming language and can interact with the user and each other via the Semantic Workbench web service.
User-Friendly Interface with Real-Time Insights and Debugging
Semantic Workbench provides a cohesive and intuitive interface for creating and managing conversations with intelligent assistants. This user-friendly UI simplifies the process of configuring settings and exposing various behaviors, making it accessible even to those who may not have extensive technical expertise. The interface allows users to chat with agents and offers other utilities to aid development, such as attaching debugging information to each message, including details about agent behavior, costs, and performance.
Agents can publish insights, called “states”, that are visible on side panels. These insights can be HTML/Markdown pages, allowing for the publication of complex information. Agents can update these insights in real-time, providing continuous feedback and debugging information. This feature is invaluable for developers looking to fine-tune agent behaviors and performance.
Customizable Configuration
Agents can have custom configuration settings, which are completely customizable. This includes text areas, radio buttons, checkboxes, and more, providing developers with the flexibility to tailor the settings to their specific needs. Agents can also expose configuration options such as prompts and temperature into assistant options making end user customization possible to improve iteration without having to update code.
File Management, Persistence and Cloning
The workbench also supports file management, allowing both users and agents to upload and manage files. This means AI agents can generate output as files that can be downloaded or shared with other agents, facilitating a more integrated and collaborative development environment.
Conversations and agents persist on disk, so you can develop an agent, restart it, and continue without losing state. Conversations can also be downloaded, making it easy to share demos and test results. Additionally, agents can be cloned to work with different configurations, allowing for parallel development and testing.
Conversation management
There are options within the conversation interface to delete messages from the conversation, as well as the option to rewind the conversation to a specific point which removes all the messages past that. You also have the option to add files to the conversation via a side panel or as part of a message. The side panel also provides you the option to export a conversation as markdown. There is an option to import past conversations from the dashboard screen as well.
The chat interface also informs you as to how many tokens your message is going to use.
Responses from generative assistants provide an info button which allows you to access the underlying messages that were sent to generate that response and examine details such as how many tokens were used.
Support for Multiple Users
The workbench supports multiple users. Conversations with assistants support multiple participants. Links to a conversation can be shared with other users to join. The workbench is configured by default to work with Microsoft organization or personal accounts.
Getting Started with Semantic Workbench
To get started with Semantic Workbench, you can follow these steps:
Clone the Repository: start by cloning the Semantic Workbench repository from GitHub at https://github.com/microsoft/semanticworkbench
Set Up the Workbench Service: Install and configure the backend service to handle core functionalities.
Launch the Workbench App: Use the frontend web interface to interact with the workbench and assistants.
Integrate your Agent Services: Develop and integrate agent services using any framework and programming language of your choice, following the service protocols/APIs. Agents can be developed with any editor, from VS Code and Visual Studio, to Rider, PyCharm, and any IDE.
You can get started even more quickly by opening the repository locally in VS Code, and letting it open the project as a dev container. In this mode all the required dependencies are provided automatically and there are launch configurations provided to start the Workbench Service and App. You can also open the repository directly in GitHub Codespaces for editing, please see this readme on an additional required step for running it in Codespaces.
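As a minimal sketch, the local flow described above starts with cloning the repository and opening it in VS Code; the exact service setup and launch steps may differ, so treat the repository README as the authoritative reference.

```powershell
# Clone the repository and open it in VS Code; accept the prompt to reopen
# it in the dev container so the required dependencies are provided automatically
git clone https://github.com/microsoft/semanticworkbench
cd semanticworkbench
code .
```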
Using the Semantic Workbench
On your first run of the Semantic Workbench, you’ll need to agree to the terms and sign in. This step is necessary even for local use because the workbench supports multiple users. Once signed in, you’ll be greeted by the Semantic Workbench dashboard.
The dashboard allows you to manage assistants. You can select an assistant from a dropdown menu if you’ve already created one or start a new one. Creating a new assistant involves selecting it from the dropdown and providing a name. The repository includes a Canonical Assistant that echoes input, which can be used as a starting point for your projects.
After saving your assistant, you can select it and change any of its default settings if you like. Assistants can surface settings such as prompts, temperature, etc. here, which makes it very convenient to iterate on ideas without having to go back to the code. This allows users who are not developers to experiment and provide feedback at earlier stages of development of generative assistants.
You can start a new conversation from either the assistant screen or the main dashboard. Opening a conversation lets you interact with your agent using a basic chat interface. You can see a response from the Echo assistant here:
If multiple assistants are involved, you can direct messages to a specific one using the “Directed to” box. The message box also reports how many tokens the message you are sending will use.
The side panel lets you see which participants across users and assistants are in the conversation. You can also download a transcript of the conversation to use it in other contexts. If there are files in the conversation they will be displayed here, and more can be added.
Explore the GitHub Repository
Semantic Workbench is publicly available on GitHub at https://github.com/microsoft/semanticworkbench. The repository contains some examples in Python and .NET, demonstrating how agents can be written in different languages. These examples also show how to leverage Azure AI Content Safety for responsible AI bots, how to render complex content types such as Mermaid graphs, Markdown, HTML, and more.
Semantic Workbench is a game-changer for anyone looking to prototype and integrate Agentic AI solutions quickly and efficiently. Its versatility, user-friendly interface, real-time insights, and seamless integration capabilities make it an invaluable tool in the AI development toolkit. Whether you’re a seasoned developer or just starting, Semantic Workbench provides the flexibility and functionality you need to bring your intelligent assistant projects to life.
How to Automate KB5040434 Installation on Multiple VMs?
Hey everyone,
I need to install the KB5040434 update on a bunch of VMs. This update is super important because it fixes several vulnerabilities. Doing this one by one is a huge hassle, and each VM also needs a restart after the update.
Is there a way to automate this process? Maybe using Azure Cloud Shell, an automation account, or some other Azure feature? Any tips or guides would be really helpful.
Thanks in advance!
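One possible approach, sketched below under assumptions: it uses the Az PowerShell module and Invoke-AzVMRunCommand to push a script to each Azure VM, and the install-kb.ps1 script referenced here is hypothetical; it would itself need to install KB5040434 (for example via the PSWindowsUpdate module) and trigger the reboot. Azure Update Manager is another option worth evaluating for fleet-wide patching.

```powershell
# Run a (hypothetical) update script on every VM in a resource group
$vms = Get-AzVM -ResourceGroupName '<resource-group-name>'
foreach ($vm in $vms) {
    Invoke-AzVMRunCommand -ResourceGroupName $vm.ResourceGroupName `
        -VMName $vm.Name `
        -CommandId 'RunPowerShellScript' `
        -ScriptPath '.\install-kb.ps1'   # script must install KB5040434 and reboot
}
```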