Tag Archives: microsoft
Using OR in a formula
I need a formula to give 3 different answers based on the value of one cell in a worksheet that could change.
J19 is the variable cell in my worksheet. The value of (J9-J20) may be a positive or negative number and then I need a value in J21 based on positive or negative. If positive, I need the sum. If negative, I need the cell to be 0.
If J20 is zero I need the value to be the sum of another cell J23
These are the 3 formulas that will give the correct answers but I need it to be an OR to each formula to get the correct answer in cell J25
J25
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is LARGE
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is smaller than the charges
=IF(j20=0,J23)+E18 works if the deductible is zero and there is a copay
Is this even possible to solve for?
Thank you,
Donna
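For what it’s worth, the three cases can usually be combined into one nested IF rather than OR’d together (a sketch based only on the formulas above — cell references are taken from the question as-is, so verify them against your sheet):

```
=IF(J20=0, J23+E18, IF(J20>J9, J9, J9-J20)+J22+J23+E18)
```

The outer IF handles the zero-deductible case (J23+E18); otherwise the inner IF reproduces the large/small deductible formula.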
Announcing general availability of real-time diarization
We are excited to announce the general availability of real-time diarization, an enhanced add-on feature of the Azure Speech service. With this feature, you can get live (real-time) speech-to-text transcription attributed to speakers (Guest1, Guest2, Guest3, etc.), so that you know which speaker said which part of the transcribed conversation.
What’s Real-time Diarization
Diarization is a feature that differentiates speakers in an audio stream. Real-time diarization is capable of distinguishing speakers’ voices in single-channel audio in streaming mode. Combined with speech-to-text functionality, diarization can produce transcription output that contains a speaker entry for each transcribed segment. The output is tagged as GUEST1, GUEST2, GUEST3, etc., based on the number of speakers in the audio conversation. The graph below demonstrates the difference between transcription results with and without diarization.
Use Cases and Scenarios
Real-time diarization can be used in a wide range of scenarios; some typical use cases are listed below. It can also help with accessibility scenarios.
Live Conversation/Meeting Transcription
When speakers are all in the same room with a single-microphone setup, you can produce a live transcription that shows which speaker (e.g. Guest-1, Guest-2, or Guest-3) said what. Combined with GPT, the diarized transcription also enables meeting/conversation summaries, recaps, or question answering about the conversation or meeting.
Microsoft Teams, for instance, leverages the diarization feature to show live meeting transcription in Teams. Based on the meeting transcription, Microsoft Teams’ Copilot provides a meeting summary, recap, and many other features that let people interact with Teams’ Copilot about their meetings.
Real-time Agent Assist
Using Speech Analytics (another new feature that the Azure Speech service announced at Build) together with real-time diarization, you can run live transcription analytics to support agent-assist scenarios and optimally address customers’ questions and concerns.
Live Caption and Subtitle (Translated Caption)
Show live captions or subtitles (translated captions) of meetings, videos, or audios.
What’s Improved Since Public Preview
After the public preview, we put a lot of effort into improving diarization quality, which was the main feedback we heard from preview users. We released a new diarization model that improved diarization quality by ~3% on WDER. In addition, we removed the limitation requiring 7 seconds of continuous audio from a single speaker: in the preview version, when a speaker first spoke, diarization only started to perform with better quality after 7 seconds of continuous audio from that speaker. The GA version no longer has this limitation.
Early Adopters from Diverse Areas
So far, over a thousand customers from diverse industries have tried out real-time diarization in a variety of scenarios. Below are some examples.
Medical
Live transcription between doctor and patient, and transcription analytics
Banking
Live meeting transcription
Telecommunication
Conversation transcription, summarization, transcription analytics
Legal
App to assist trial and appellate attorneys who are preparing for oral arguments (e.g. capturing the attorneys’ and judges’ positions during mock oral arguments, etc.)
Try it Out
To try out real-time diarization, go to Speech Studio (Speech Studio – Real-time speech to text (microsoft.com)) and follow these steps (shown in the screenshot below) to experience the feature:
Click on “Show advanced options”.
Use the “Speaker diarization” toggle to turn on or off the real-time diarization.
Real-time diarization is available in all regions that the Azure Speech service supports. It is released through the Speech SDK (version 1.31.0 or higher) and is available in the following SDKs:
C#
C++
Java
JavaScript
Python
Please feel free to follow the Quickstart: Real-time diarization to start experiencing the feature.
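Since each transcribed segment arrives with a speaker tag (GUEST1, GUEST2, …), a common post-processing step is to fold the stream of segments into readable speaker turns. The sketch below is plain Python and uses assumed sample data, not the Speech SDK itself — it only illustrates how one might consume the (speaker, text) pairs the service produces:

```python
def format_diarized(segments):
    """Merge consecutive segments from the same speaker into one
    labeled line per speaker turn."""
    turns = []
    for speaker, text in segments:
        if turns and turns[-1][0] == speaker:
            # Same speaker as the previous segment: append to that turn.
            turns[-1] = (speaker, turns[-1][1] + " " + text)
        else:
            # New speaker: start a new turn.
            turns.append((speaker, text))
    return [f"{speaker}: {text}" for speaker, text in turns]

# Assumed sample output of a diarized transcription session:
segments = [
    ("GUEST1", "Good morning,"),
    ("GUEST1", "shall we start?"),
    ("GUEST2", "Yes, let's go."),
]
for line in format_diarized(segments):
    print(line)
```

In a real application, the same logic would run inside the SDK’s transcribed-event callback.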
Microsoft Tech Community – Latest Blogs –Read More
No access to reservation page
Best,
One of my co-workers has admin access to 4 reservation pages in Bookings. When she accesses a reservation page she keeps getting a message that she has no permissions to the reservation page and that she has no access to 3 of the 4 reservation pages.
I have already done the following:
– cleared browser history.
– cleared cache.
– assigned the same permissions to each reservation page.
– tried on a mobile device.
But none of the steps above has helped.
Can anyone help me get this persistent problem solved?
Regards,
Robby
Teams does not manage properly External Monitor on iPad
I have an iPad Air 5 that supports display to an external HDMI monitor through the USB-C port.
When I configure the external monitor as an extended display (not mirrored), Teams seems unable to manage that configuration properly. In more detail, I’ve observed the following issues:
1. When Teams is already open, switching to the external monitor makes it impossible to join meetings (tapping/clicking on Join meeting has no effect)
2. Closing the application and re-opening it (with the external monitor connected) sometimes lets me join the meeting, but the app becomes unusable because the meeting window is shown on the external monitor as a very small window (and no other apps can apparently co-exist with it), while the “main” Teams application is on the iPad display. When the main Teams application is moved to the external monitor, the “meeting” window disappears.
This is annoying; each time I have to join a meeting I have to detach the cable connection to the external monitor if I want to run the meeting properly…
How to export all sheets as separate files: sheetName.pdf from workbook?
Hello, we’re using Microsoft Excel for Mac version 16.84. We can create workbooks with sheets, but we cannot see how to export all workbook sheets as separate PDF documents with their names as file names.
It is possible to export the whole workbook as a PDF and then drag the individual pages out as their own .pdf documents; but they are saved as 1(dragged).pdf, 2(dragged).pdf, etc. We lose the name.
Has anybody else had this issue? Is there any way to export them with their names, as in previous versions of the software?
Thanks all.
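One workaround is a short VBA macro that loops over the sheets and exports each under its own name. This is a sketch using the standard `ExportAsFixedFormat` call; it is untested on your build of Excel for Mac, which may prompt for folder-access permission:

```vba
Sub ExportSheetsAsPDF()
    ' Sketch: save every sheet of this workbook as <SheetName>.pdf
    ' next to the workbook file. On Windows, use "\" instead of "/".
    Dim ws As Worksheet
    For Each ws In ThisWorkbook.Worksheets
        ws.ExportAsFixedFormat Type:=xlTypePDF, _
            Filename:=ThisWorkbook.Path & "/" & ws.Name & ".pdf"
    Next ws
End Sub
```

Run it from the VBA editor (Tools > Macro > Visual Basic Editor) with the workbook open and saved.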
Daily Agenda Mail
Hello,
We are using the new Outlook App and are wondering where the option “Receive daily agenda e-mail” is.
Has this feature been removed or where can I find it?
Microsoft Security Development Lifecycle (SDL)
Security and privacy should never be an afterthought when developing software. A formal process must become standard practice to ensure they are considered at all points of the product’s lifecycle. The rise of software supply chain attacks—including the XZ Utils backdoor, the SolarWinds attack, and the Log4j vulnerabilities—highlights the critical need to build security into the software development process from the ground up.
Over the last 20 years, there have been many improvements to the security development lifecycle (SDL) reflecting changes in internal tools and processes. We are excited to announce that this week, we have updated the security practices on the SDL website, and we will continue to update this site with new information on a regular basis.
Microsoft Security Development Lifecycle (SDL) Timeline
In the early 2000s, personal computers (PCs) were becoming increasingly common in the home and the internet was gaining more widespread use. This led to a rise in malicious software looking to take advantage of users connecting their home PCs to the internet. It quickly became evident that protecting users from malicious software required a fundamentally different approach to security.
In January 2002, Microsoft launched its Trustworthy Computing initiative to help ensure Microsoft products and services were built to be inherently secure, available, and reliable, and operated with business integrity.
In 2004, the Microsoft Security Development Lifecycle (SDL) was born out of the Trustworthy Computing initiative and introduced security throughout all phases of software development at Microsoft. The SDL began life to bake security and privacy principles into the culture of Microsoft. It originally consisted of a relatively small set of requirements aligned to each phase of the waterfall model of software development, aimed at preventing developers from inadvertently introducing vulnerabilities into their code. It also included a few supporting tools that could identify what was, at the time, a short list of known issues. Back then, the SDL was updated annually. Products were released every two to three years and a final security review to confirm that best practices had been followed was a great advancement from existing approaches.
We no longer live in a world where software releases are months or even years apart. The cloud and continuous integration/continuous deployment (CI/CD) practices enable services to be shipped daily, or sometimes multiple times a day. The software supply chain has grown more complicated as more dependencies on open-source software are created. And while the SDL has continued to evolve to keep up with these changes and the shifting threat landscape, it has also grown more complex.
SDL Now
Secure software development still requires embedding security into each step of the development process, from the design and build stages to deployment and operations (run). The SDL now continuously measures security throughout the development lifecycle, and it continues to evolve with the changing landscape of cloud computing, AI, and CI/CD automation. As seen in the image below, security controls are integrated to ensure continuous enforcement of zero trust principles and governance from the Design stage all the way to Run.
The image below shows key security capabilities in each of the stages of the development lifecycle.
The SDL is the approach Microsoft uses to integrate security into DevOps processes (sometimes called a DevSecOps approach). You can use this SDL guidance and documentation to adapt this approach and practices to your organization.
The practices described in the SDL approach can be applied to all types of software development and all platforms from classic waterfall through to modern DevOps approaches and can be generally applied across:
Software – whether you are developing software code for firmware, AI applications, operating systems, drivers, IoT Devices, mobile device apps, web services, plug-ins or applets, hardware microcode, low-code/no-code apps, or other software formats. Note that most practices in the SDL are applicable to secure computer hardware development as well.
Platforms – whether the software is running on a ‘serverless’ platform approach, on an on-premises server, a mobile device, a cloud hosted VM, a user endpoint, as part of a Software as a Service (SaaS) application, a cloud edge device, an IoT device, or anywhere else.
The SDL recommends 10 security practices to incorporate into your development workflows. Applying the 10 security practices of the SDL is an ongoing process of improvement, so a key recommendation is to start somewhere and keep enhancing as you proceed. This continuous process involves changes to culture, strategy, processes, and technical controls as you embed security skills and practices into DevOps workflows.
Next steps
Head over to the updated SDL site and start adapting the SDL guidance and practices to your organization.
File properties information
Hi all,
I’m looking for ways to scan a repository to get file properties information in any environment using Microsoft solutions.
File properties information such as file name, file type, size, owner, last modified, etc.
Regards
Aaron
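If the repository is a local folder or file share, a starting point is plain PowerShell. This is a sketch — `$repoPath` is a placeholder, and `Get-Acl` needs sufficient rights to read ownership:

```powershell
# Placeholder path to the repository root
$repoPath = "C:\Repo"

Get-ChildItem -Path $repoPath -Recurse -File |
    Select-Object Name,
                  Extension,                  # file type
                  Length,                     # size in bytes
                  LastWriteTime,              # last modified
                  @{ Name = 'Owner'; Expression = { (Get-Acl $_.FullName).Owner } } |
    Export-Csv -Path "$repoPath\FileProperties.csv" -NoTypeInformation
```

For SharePoint or other cloud repositories the inventory mechanism differs, so treat this only as the on-disk case.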
Policy personal data text for each service
Hello:
I am building a Bookings page where customers can book different services. I use custom fields to get extra information. For legal reasons, I must include a specific personal data policy text for each service, different in each case. How can I add this text to each service, underneath the custom fields?
Thank you very much.
UAC during OOBE (after switching from Admin to Standard user in Windows Autopilot)
We switched settings in Windows Autopilot to make the user a standard user instead of an admin. Now, during OOBE I am asked multiple times to execute a PowerShell script as an admin.
What causes this behavior, and how can we prevent it?
Lesson Learned #491: Monitoring Blocking Issues in Azure SQL Database
Some time ago, we wrote the article Lesson Learned #22: How to identify blocking issues? Today, I would like to enhance that topic by introducing a monitoring system that expands on that guide. This PowerShell script not only identifies blocking issues but also calculates the total, maximum, average, and minimum blocking times.
My idea is to run this PowerShell script, which executes T-SQL queries to identify blocking issues, showing the impact of the blocking and the blocking chains every 5 seconds. The script saves the details to a file for further review.
# Configure the connection string and folder for log file
$connectionString = "Server=tcp:servername.database.windows.net,1433;Database=dbname;User ID=username;Password=pwd!;Encrypt=true;Connection Timeout=30;"
$Folder = "C:\SQLData"
# Function to get and display blocking statistics
function Get-BlockingStatistics {
$query = "
select conn.session_id as blockerSession,
conn2.session_id as BlockedSession,
req.wait_time as Waiting_Time_ms,
cast((req.wait_time/1000.) as decimal(18,2)) as Waiting_Time_secs,
cast((req.wait_time/1000./60.) as decimal(18,2)) as Waiting_Time_mins,
t.text as BlockerQuery,
t2.text as BlockedQuery,
req.wait_type from sys.dm_exec_requests as req
inner join sys.dm_exec_connections as conn on req.blocking_session_id=conn.session_id
inner join sys.dm_exec_connections as conn2 on req.session_id=conn2.session_id
cross apply sys.dm_exec_sql_text(conn.most_recent_sql_handle) as t
cross apply sys.dm_exec_sql_text(conn2.most_recent_sql_handle) as t2
"
$connection = Connect-WithRetry -connectionString $connectionString -maxRetries 5 -initialDelaySeconds 2
if ($connection -ne $null)
{
$blockings = Execute-SqlQueryWithRetry -connection $connection -query $query -maxRetries 5 -initialDelaySeconds 2
$connection.Close()
}
if ($blockings.Count -gt 0) {
$totalBlockings = $blockings.Count
$maxWaitTime = $blockings | Measure-Object -Property WaitTimeSeconds -Maximum | Select-Object -ExpandProperty Maximum
$minWaitTime = $blockings | Measure-Object -Property WaitTimeSeconds -Minimum | Select-Object -ExpandProperty Minimum
$avgWaitTime = $blockings | Measure-Object -Property WaitTimeSeconds -Average | Select-Object -ExpandProperty Average
logMsg "Total blockings: $totalBlockings" (1)
logMsg "Maximum blocking time (seconds): $maxWaitTime" (2)
logMsg "Minimum blocking time (seconds): $minWaitTime" (2)
logMsg "Average blocking time (seconds): $avgWaitTime" (2)
logMsg "---- Blocking chain details: ----" (1)
foreach ($blocking in $blockings)
{
logMsg "Blocked Session ID: $($blocking.SessionId)"
logMsg "Wait Time (seconds): $($blocking.WaitTimeSeconds)"
logMsg "Blocker Session ID: $($blocking.BlockingSessionId)"
logMsg "Blocked SQL Text: $($blocking.SqlText)"
logMsg "Blocker SQL Text: $($blocking.BlockingSqlText)"
logMsg "---------------------------------------------"
}
} else {
logMsg "No blockings found at this time."
}
}
# Function to execute a SQL query with retry logic
function Execute-SqlQueryWithRetry {
param (
[System.Data.SqlClient.SqlConnection]$connection,
[string]$query,
[int]$maxRetries = 5,
[int]$initialDelaySeconds = 2
)
$attempt = 0
$success = $false
$blockings = @()
while (-not $success -and $attempt -lt $maxRetries) {
try {
$command = $connection.CreateCommand()
$command.CommandText = $query
$reader = $command.ExecuteReader()
while ($reader.Read()) {
$blockingE = New-Object PSObject -Property @{
SessionId = $reader["BlockedSession"]
WaitTimeSeconds = $reader["Waiting_Time_secs"]
BlockingSessionId = $reader["BlockerSession"]
SqlText = $reader["BlockedQuery"]
BlockingSqlText = $reader["BlockerQuery"]
}
$blockings+=$blockingE
}
$success = $true
} catch {
$attempt++
if ($attempt -lt $maxRetries) {
logMsg "Query execution attempt $attempt failed. Retrying in $initialDelaySeconds seconds..." 2
Start-Sleep -Seconds $initialDelaySeconds
$initialDelaySeconds *= 2 # Exponential backoff
} else {
logMsg "Query execution attempt $attempt failed. No more retries." 2
throw $_
}
}
}
return ,($blockings)
}
#--------------------------------
#Log the operations
#--------------------------------
function logMsg
{
Param
(
[Parameter(Mandatory=$true, Position=0)]
[string] $msg,
[Parameter(Mandatory=$false, Position=1)]
[int] $Color,
[Parameter(Mandatory=$false, Position=2)]
[boolean] $Show=$true,
[Parameter(Mandatory=$false, Position=3)]
[string] $sFileName,
[Parameter(Mandatory=$false, Position=4)]
[boolean] $bShowDate=$true,
[Parameter(Mandatory=$false, Position=5)]
[boolean] $bSaveOnLogFile=$true
)
try
{
if($bShowDate -eq $true)
{
$Fecha = Get-Date -format "yyyy-MM-dd HH:mm:ss"
$msg = $Fecha + " " + $msg
}
If( TestEmpty($SFileName) )
{
Write-Output $msg | Out-File -FilePath $LogFile -Append
}
else
{
Write-Output $msg | Out-File -FilePath $sFileName -Append
}
$Colores="White"
If($Color -eq 1 )
{
$Colores ="Cyan"
}
If($Color -eq 3 )
{
$Colores ="Yellow"
}
if($Color -eq 2 -And $Show -eq $true)
{
Write-Host -ForegroundColor White -BackgroundColor Red $msg
}
else
{
if($Show -eq $true)
{
Write-Host -ForegroundColor $Colores $msg
}
}
}
catch
{
Write-Host $msg
}
}
#--------------------------------
#Validate Param
#--------------------------------
function TestEmpty($s)
{
if ([string]::IsNullOrWhitespace($s))
{
return $true;
}
else
{
return $false;
}
}
#--------------------------------------------------------------
#Create a folder
#--------------------------------------------------------------
Function CreateFolder
{
Param( [Parameter(Mandatory)]$Folder )
try
{
$FileExists = Test-Path $Folder
if($FileExists -eq $False)
{
$result = New-Item $Folder -type directory
if($result -eq $null)
{
logMsg("Impossible to create the folder " + $Folder) (2)
return $false
}
}
return $true
}
catch
{
return $false
}
}
function GiveMeFolderName([Parameter(Mandatory)]$FolderSalida)
{
try
{
$Pos = $FolderSalida.Substring($FolderSalida.Length-1,1)
If( $Pos -ne "\" )
{return $FolderSalida + "\"}
else
{return $FolderSalida}
}
catch
{
return $FolderSalida
}
}
#-------------------------------
#Delete a file
#-------------------------------
Function DeleteFile{
Param( [Parameter(Mandatory)]$FileName )
try
{
$FileExists = Test-Path $FileName
if($FileExists -eq $True)
{
Remove-Item -Path $FileName -Force
}
return $true
}
catch
{
return $false
}
}
# Function to connect to the database with retry logic
function Connect-WithRetry {
param (
[string]$connectionString,
[int]$maxRetries = 5,
[int]$initialDelaySeconds = 2
)
$attempt = 0
$connection = $null
while (-not $connection -and $attempt -lt $maxRetries) {
try {
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
} catch {
$attempt++
if ($attempt -lt $maxRetries) {
logMsg "Connection attempt $attempt failed. Retrying in $initialDelaySeconds seconds..." 2
Start-Sleep -Seconds $initialDelaySeconds
$initialDelaySeconds *= 2 # Exponential backoff
} else {
logMsg "Connection attempt $attempt failed. No more retries." 2
throw $_
}
}
}
return $connection
}
clear
$result = CreateFolder($Folder) # Creating the folder where we are going to store the results, log, and zip.
If( $result -eq $false)
{
write-host "It was not possible to create the folder"
exit;
}
$sFolderV = GiveMeFolderName($Folder) # Ensure the folder name ends with a backslash.
$LogFile = $sFolderV + "Blockings.Log" # Log of the operations.
logMsg("Deleting Operation Log file") (1)
$result = DeleteFile($LogFile) # Delete Log file
logMsg("Deleted Operation Log file") (1)
# Loop to run the monitoring every 5 seconds
while ($true) {
Clear-Host
Get-BlockingStatistics
Start-Sleep -Seconds 5
}
Please note that this script is provided as-is and without any warranty. Use it at your own risk. Always test scripts in a development environment before deploying them to production.
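If you want the monitor to report something in a development database, you can create a blocking scenario by hand. This is a sketch — dbo.DemoTable is a hypothetical table; any updatable row will do:

```sql
-- Session 1: hold an exclusive lock and do not commit yet.
BEGIN TRANSACTION;
UPDATE dbo.DemoTable SET Col1 = 1 WHERE Id = 1;

-- Session 2 (a separate connection): this statement will block
-- until session 1 commits, and the monitor should report it.
UPDATE dbo.DemoTable SET Col1 = 2 WHERE Id = 1;

-- Back in session 1: release the lock when you are done testing.
-- COMMIT TRANSACTION;
```

While session 2 is waiting, the script above should log the blocker/blocked session pair and the wait times.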
Introducing the Unified Azure Maps Experience
We are thrilled to announce the unification of Bing Maps for Enterprise (BME) with Azure Maps, marking a significant milestone in our geospatial services at Microsoft. Azure Maps now boasts a robust stack of geospatial offerings, leveraging the powerful capabilities of Microsoft Maps, which also drives Bing Maps (our consumer maps experience). Over the past year, our team has dedicated significant time and effort to combine the strengths of Bing Maps for Enterprise into Azure Maps, enhancing our global quality and coverage.
One of the major enhancements is the adoption of vector tiles in Azure Maps for a more responsive map experience. When utilizing Azure Maps in your solutions, you not only leverage the security and compliance advantages of Azure but also benefit from the extensive quality and coverage provided by Microsoft Maps.
This unification ensures that users of Azure Maps receive a comprehensive mapping solution backed by the unparalleled strengths of Azure’s infrastructure, Microsoft Maps’ data quality and coverage, and many of the same advanced geospatial capabilities that Bing Maps for Enterprise customers depend on. We are excited about the opportunities this integration presents and look forward to continuing to deliver innovative mapping solutions to our customers worldwide.
Azure Maps has many of the same features that BME customers have come to rely on. Nevertheless, this unification also introduces exciting new features to Azure Maps, such as weather APIs, private indoor maps, multiple authentication methods, geolocation service, and robust privacy and compliance benefits.
Ready to Make the Move?
For customers that are using Bing Maps for Enterprise and migrating to Azure Maps, some development will be needed. To help you in this transition, we have written migration documents for our REST APIs as well as for the Azure Maps web control. A good starting point is also our Azure Maps samples site, where you can find samples for many scenarios along with their source code.
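To give a flavor of the web control after migration, here is a minimal sketch of initializing the map. The div id and key are placeholders, and the page must already load the Azure Maps Web SDK script and stylesheet:

```javascript
// Initialize the Azure Maps web control in a <div id="myMap"> element.
var map = new atlas.Map('myMap', {
    center: [-122.33, 47.6],   // [longitude, latitude]
    zoom: 12,
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your-Azure-Maps-Key>'  // placeholder
    }
});
```

Microsoft Entra authentication is also supported via `authOptions` for production scenarios.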
More resources about Azure Maps can be found here:
Azure Maps Documentation
Azure Maps Samples
Azure Maps Blog
Microsoft Q&A for Azure Maps
Creating policy for Defender for Servers
Hello,
Some time ago we enabled Defender for Servers for virtual machines in our tenant. Some users reported to me that DfS is causing high CPU usage on their machines and is blocking some files and processes from being executed. I have two questions:
– can we create a policy to set a maximum CPU usage for Defender for Servers for a specified subscription?
– can we disable quarantine and any other detection for selected machines so that they only ALERT but do not take any action?
I checked that we can set the CPU usage with a PowerShell command, but these machines are removed and added every week, so we would like to automate this process.
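For the CPU question specifically, this is roughly what the per-machine setting looks like with the documented Microsoft Defender Antivirus cmdlets. Rolling it out automatically to machines that come and go would still need something like an Intune remediation script or Azure Policy guest configuration, which is an assumption about your setup:

```powershell
# Limit the average CPU load factor used during Defender Antivirus scans
# (value is a percentage; 20 here is just an example).
Set-MpPreference -ScanAvgCPULoadFactor 20

# Inspect the current value to confirm the change took effect.
(Get-MpPreference).ScanAvgCPULoadFactor
```

Note this governs scan CPU on the local Defender Antivirus engine; it is not a subscription-wide Defender for Servers policy.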
Exchange Online and on-prem
I need help with Exchange on-prem not delivering email to a user whose account was created in Active Directory while the email account is on Office 365 / Exchange Online. I was able to get the mailbox remote-enabled and set the GUID, but if a copier sends a message through the server, the user does not get it. I tried changing the legacy DN, but it did not work. The mail trace for everyone else shows up as copanyvl.mail.onmicrosoft.com, but this does not happen for the user in question, so I am looking for help. Any ideas would be greatly appreciated.
The on-prem server is Exchange 2019.
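A first diagnostic step in a hybrid setup like this is to compare the routing address and GUID of the on-premises remote-mailbox object with the cloud mailbox. This is a sketch using standard Exchange cmdlets; the identity is a placeholder:

```powershell
# On the on-premises Exchange 2019 server: check the remote mailbox stub.
# RemoteRoutingAddress should point at the tenant's mail.onmicrosoft.com domain.
Get-RemoteMailbox -Identity "user@company.com" |
    Format-List RemoteRoutingAddress, ExchangeGuid, EmailAddresses

# In Exchange Online PowerShell: compare GUID and addresses with the cloud mailbox.
Get-Mailbox -Identity "user@company.com" |
    Format-List ExchangeGuid, EmailAddresses
```

If the RemoteRoutingAddress is missing or the GUIDs do not match, on-premises delivery to that user typically fails while everyone else routes normally.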
Azure DevOps Configuration required on new work item types (Resolved)
** Business Requirement **
Create a new iteration backlog type ‘Impediment’.
As part of the configuration, if a sprint board has column options other than (New | Active | Resolved | Closed), a “Configuration required” message will be displayed once the new work item type has been added to Azure DevOps.
Sprint board with a custom column:
Adding the new Task type:
Sprint board with a custom column after work item type has been added:
In most cases this can easily be resolved by going to ‘Column Options’ and clicking save to update the sprint board with the correct status.
However, in our case we needed to do this across a large organization with many boards, some with custom columns, and this is a disruptive action for users, who would each need to go to their sprint boards to update the new status.
An automated method can be used through the API. Though the documentation is vague on what the JSON body requires, the engineers from Microsoft were able to provide us with the required structure.
[
{
"mappings": [
{
"workItemType": "Task",
"state": "New"
},
{
"workItemType": "Bug",
"state": "New"
},
{
"state": "New",
"workItemType": "Impediment"
}
],
"order": 0,
"name": "New",
"id": ""
}
]
I created the following PowerShell script to add the new work item states to sprint boards with custom columns.
# Define parameters for the script
Param(
[string]$organisation = "AzureDevOps-Organisation-Name",
[string]$project = "AzureDevOps-Project-Name",
[string]$user = "email address removed for privacy reasons",
[string]$token = "Your-PAT" # Personal Access Token
)
# Convert username and token to Base64 for Basic Authentication
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user,$token)))
# Define headers for the API request
$headers = @{Authorization=("Basic {0}" -f $base64AuthInfo)}
# Define the URL for the Teams API
$TeamUrl = "https://dev.azure.com/$($organisation)/_apis/projects/$project/teams?api-version=7.1-preview.3"
# Send a GET request to the Teams API
$TeamRequest = Invoke-RestMethod -Uri $TeamUrl -Method Get -ContentType "application/json" -Headers $headers
# Loop through each team in the response
foreach ($Team in $TeamRequest.value) {
# Define the URL for the Task Board Columns API for the current team
$TaskBoardUrl = “https://dev.azure.com/$($organisation)/$project/$($Team.id)/_apis/work/taskboardcolumns?api-version=7.1-preview.1”
# Send a GET request to the Task Board Columns API
$TaskBoardResult = Invoke-RestMethod -Uri $TaskBoardUrl -Method Get -ContentType “application/json” -Headers $headers
# Loop through each column in the response
foreach ($Column in $TaskBoardResult.columns)
{
# If the column name does not match ‘New’, ‘Active’, ‘Resolved’, or ‘Closed’
if ($Column.name -notmatch ‘New|Active|Resolved|Closed’)
{
# Define an empty array for the columns
$columnsArray = @()
# Define valid states
$validStates = @(“New”, “Active”, “Closed”)
# Loop through each column in the response
$TaskBoardResult.columns | ForEach-Object {
# Create a new object for the column
$column = New-Object PSObject -Property @{
id = “”
name = $_.name
order = $_.order
mappings = $_.mappings
}
# Filter the mappings for the column
$column.mappings = $column.mappings | Where-Object { $_.workItemType -ne “Impediment” -or ($_.workItemType -eq “Impediment” -and $_.state -eq $column.name) }
# If the column name is in the valid states
if ($column.name -in $validStates) {
# Create a new mapping for the column
$newMapping = New-Object PSObject -Property @{
state = $column.name
workItemType = “Impediment”
}
# Add the new mapping to the column
$column.mappings += $newMapping
}
# Add the column to the array
$columnsArray += $column
}
# Convert the array to JSON
$jsonBody = $columnsArray | ConvertTo-Json -Depth 10
# Define the URL for the Task Board Columns API for updating
$TaskBoardUrlUpdate = “https://dev.azure.com/$($organisation)/$project/$($Team.id)/_apis/work/taskboardcolumns?api-version=7.1-preview.1”
# Send a PUT request to the Task Board Columns API to update the columns
$ResultCall = Invoke-RestMethod -Uri $TaskBoardUrlUpdate -Method PUT -Body $jsonBody -ContentType “application/json” -Headers $headers
# Print the validation message and columns from the response
$ResultCall.validationMesssage
$ResultCall.columns
}
}
}
Result after running the script:
Azure Devops Library Variables Audit
Hi,
This may be a difficult question to answer, but we are currently developing an ALM strategy for D365 CE. Historically, third-party suppliers have delivered our customisations and support via DevOps pipelines. The suppliers have used various variable groups, and we are unsure what some of them are used for.
Are there any tools that can determine where these variables are used? Or has anyone with experience of auditing variable groups used tools or processes they can direct me towards?
Any help or advice is appreciated. TIA
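One low-tech approach (a sketch, not an official audit tool) is to export the pipeline YAML definitions and scan them for references to each variable group's variable names. A minimal Python sketch, where the file contents and variable names are illustrative assumptions:

```python
import re

def find_variable_references(yaml_texts, variable_names):
    """Map each variable name to the pipeline files that reference it.

    yaml_texts: dict of {file_name: yaml_content} (e.g. exported pipeline
    definitions); variable_names: names taken from the variable groups
    being audited (assumed known).
    """
    usage = {name: [] for name in variable_names}
    for file_name, text in yaml_texts.items():
        for name in variable_names:
            # Match macro syntax $(name) and expression syntax variables.name
            pattern = r"\$\(" + re.escape(name) + r"\)|variables\." + re.escape(name) + r"\b"
            if re.search(pattern, text):
                usage[name].append(file_name)
    return usage

# Illustrative pipeline contents
pipelines = {
    "build.yml": "steps:\n  - script: echo $(connectionString)\n",
    "deploy.yml": "steps:\n  - script: echo $(apiKey)\n",
}
print(find_variable_references(pipelines, ["connectionString", "unusedVar"]))
```

Variables that end up with an empty file list are candidates for retirement, though runtime-only usage (scripts that read them via environment variables) would still need a manual check.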
No output for Invoke-MgGraphRequest for user presence
Hi All!
I am experiencing some odd behaviour with an Invoke-MgGraphRequest call in an Azure Runbook and could do with a nudge in the right direction.
I am trying to report on my Teams presence using the Graph API. When I use the following code, it works and the presence is returned:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/communications/presences/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
But when I try to assign this output to a variable (so it can be passed to a SharePoint list), I don't get any output:
$returned = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/communications/presences/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$returned.value | ForEach-Object { $_.availability }
Am I doing something wrong, or is this expected behaviour?
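For context on the response shape: the single-user presence endpoint returns one presence object rather than a collection, so there is no `value` array to enumerate; `availability` is a top-level property. A minimal Python sketch with a canned response (the field values are examples, not real data):

```python
# Illustrative shape of a response from
# GET /communications/presences/{id} - a single object, not a collection.
returned = {
    "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "availability": "Available",
    "activity": "Available",
}

# Enumerating a non-existent 'value' key yields nothing,
# matching the empty output described above.
for item in returned.get("value", []):
    print(item["availability"])  # never runs

# Reading the property directly works:
print(returned["availability"])
```

The same distinction applies in PowerShell: accessing the property directly on the returned object, rather than through `.value`, is the pattern for single-resource responses.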
Searching Shared Mailboxes on MacOS is not showing same result as separate accounts
When I have 4 normal mail accounts in Outlook on macOS, I can search through all inboxes ('Alle postvakken' / 'All Mailboxes') and find all related mails across all mailboxes.
When I change those mailboxes to shared mailboxes and add them to Outlook that way, searching no longer works as before. From that moment on I only find results in the currently selected mailbox.
How can we get Outlook (on macOS) to search through all delegated mailboxes that are visible in Outlook?
Big Change Coming in Authentication for Outlook Add-ins
On April 9, 2024, Microsoft announced a big change in authentication for Outlook add-ins. It’s likely that people don’t realize the kind of change that’s coming. The change removes legacy Exchange authentication methods and replaces them with Nested App Authentication (NAA). Time is running short for developers to upgrade and test their code and Microsoft 365 tenants to get ready for the changeover.
https://office365itpros.com/2024/05/21/outlook-add-in-authentication/
Calls save as draft under Teams chat
Hello,
Please, I need your help with this issue.
Every call is saved as a draft under the Teams chat.
When checking in Teams on the web, nothing appears there.
The issue happens only on incoming calls.
It happens on users' personal numbers, but it seems to affect users with call queues.
We cleared the cache and the drafts are still coming through.