Category: Microsoft
Share your feedback on Microsoft’s certification renewal experience!
We are trying to gather some feedback on the certification renewal process. If you’ve renewed a certification in the last 6 months, please complete this very short survey: https://forms.office.com/r/9jtvW1x8JV by May 2.
Please share this with anyone who may be eligible. This is your chance to share your thoughts and help us improve the experience.
Disabling authentication methods in Entra having no effect
Fairly new to MS365 here and we’re trying to restrict which MFA methods our users can use. We want our users to be able to either use the Authenticator app or a FIDO2 key depending on their role, in addition to a TAP to do the initial login.
We’re testing disabling various methods via the Authentication methods page in Entra. As a representative test, we set TAP to Disabled, and an error was returned when we attempted to issue a TAP for a user via the user’s Authentication methods page in Intune.
However, we don’t get consistent results with other auth methods: Authenticator, Security key (FIDO2), and SMS. I put a specific group in the ‘Enable and target’ > ‘Exclude’ section for all three and was still able to configure Authenticator and a phone for SMS. When viewing the methods configured for the user, only the security key was listed under ‘unusable methods’; the policies for Authenticator and SMS appear to have no effect. Similar tests with just one auth method yield the same result.
Is there something we’re doing wrong, or misunderstanding, about how these policies work?
Reporting-Task Field
Hi,
I renamed Board Status in the ribbon to PMO STATUS REPORT as follows:
When I run my reports, I see “Board Status” not the new revised name.
Can the task field under Reporting be updated to reflect the new name? Is that possible?
Thanks.
NTLM vs Kerberos
Reposting – This article was originally written and posted by Nuno Tavares in 2018.
In this post, we will go through the basics of NTLM and Kerberos. We will explain using the three Ws, covering what the main differences between them are, how to identify when a protocol is being used over the other, and why one is safer than the other.
So, without further ado. Here is the story…
Chapter 1: The What
What is NTLM?
NTLM is an authentication protocol. It was the default protocol in older Windows versions, but it’s still used today: if Kerberos fails for any reason, NTLM is used as a fallback.
NTLM has a challenge/response mechanism.
Here is how the NTLM flow works:
1. A user accesses a client computer and provides a domain name, user name, and password.
2. The client computes a cryptographic hash of the password and discards the actual password. The client sends the user name to the server (in plaintext).
3. The server generates a 16-byte random number, called a challenge, and sends it back to the client.
4. The client encrypts this challenge with the hash of the user’s password and returns the result to the server. This is called the response.
5. The server sends the following three items to the domain controller:
– User name
– Challenge sent to the client
– Response received from the client
6. The domain controller uses the user name to retrieve the hash of the user’s password, encrypts the challenge with it, and compares the result with the response received from the client (in step 4). If they are identical, authentication is successful, and the domain controller notifies the server.
7. The server then sends the appropriate response back to the client.
What is Kerberos?
Kerberos is an authentication protocol. It has been the default authentication protocol on Windows since Windows 2000, replacing NTLM.
Here is how the Kerberos flow works:
1. A user logs in to the client machine. The client sends a plaintext request for a Ticket Granting Ticket (TGT). The message contains: the ID of the user; the ID of the requested service (TGT); the client’s network address (IP); and the validity lifetime.
2. The Authentication Server checks whether the user exists in the KDC database.
a. If the user is found, it randomly generates a key (session key) for use between the user and the Ticket Granting Server (TGS).
b. The Authentication Server then sends two messages back to the client:
– One encrypted with the TGS secret key.
– One encrypted with the client secret key.
Note: The TGS session key is the shared key between the client and the TGS. The client secret key is the hash of the user credentials (username+password).
3. The client decrypts the session key and can log on, caching it locally. It also stores the encrypted TGT in its cache. When accessing a network resource, the client sends a request to the TGS containing the name of the resource it wants to access, the user ID/timestamp, and the cached TGT.
4. The TGS decrypts the user information and provides a service ticket and a service session key for accessing the service, then sends them back to the client, encrypted.
5. The client sends the request to the server (encrypted with the service ticket and the session key).
6. The server decrypts the request and, if it’s genuine, provides access to the service.
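The AS exchange (steps 1–2) can be modeled with a toy cipher in Python. This is a sketch only: the XOR "cipher" below stands in for real Kerberos encryption (AES/RC4 with proper key derivation), and the user name and password are placeholders.

```python
import hashlib
import json
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR cipher keyed by a SHA-256 keystream -- NOT real Kerberos crypto.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Long-term keys known to the KDC.
client_key = hashlib.sha256(b"alice:P@ssw0rd").digest()  # hash of user creds
tgs_key = os.urandom(32)                                 # TGS secret key

# AS reply: a fresh session key is sent twice -- once sealed inside the TGT
# with the TGS secret key, once sealed for the client with its own key.
session_key = os.urandom(32)
tgt = toy_encrypt(tgs_key, json.dumps(
    {"user": "alice", "key": session_key.hex()}).encode())
for_client = toy_encrypt(client_key, session_key)

# The client recovers the session key with its own key; it cannot open the TGT.
assert toy_decrypt(client_key, for_client) == session_key

# Later, only the TGS can open the TGT and find the same session key inside.
tgt_contents = json.loads(toy_decrypt(tgs_key, tgt))
assert bytes.fromhex(tgt_contents["key"]) == session_key
```

The design point this illustrates: the client never sees the TGS secret key, and the KDC never sends the password anywhere; shared session keys are distributed by double-sealing them.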
Chapter 2: The When
How can we identify when we are using NTLM or Kerberos?
We can confirm the authentication protocol being used by collecting a Fiddler trace.
In the Fiddler trace, we can see the requests being made under Inspectors > Headers:
Kerberos:
NTLM:
If the request starts with Kerberos and fails, NTLM will be used instead. We can see the reply in the Headers as well:
Kerberos Dependencies:
Both the client and the server need to be running Windows 2000 or later and be in the same domain or in trusted domains.
An SPN (Service Principal Name) needs to exist in Active Directory for the domain account used to run the service to which the client is authenticating.
Chapter 3: The Why
Why is Kerberos preferred?
NTLMv1 hashes can be cracked in seconds with today’s computing power, since they are always the same length and are not salted. NTLMv2 is an improvement, since its length varies and the hash is salted; however, it’s still not very secure. Even though the hash is salted before it’s sent, it is stored unsalted in the machine’s memory.
Furthermore, NTLM’s challenge/response mechanism exposes the password to offline cracking when responding to the challenge.
Kerberos provides several advantages over NTLM:
More secure: no password stored locally or sent over the network.
Better performance: improved performance over NTLM authentication.
Delegation support: servers can impersonate clients and use the client’s security context to access a resource.
Simpler trust management: avoids the need for pairwise trust relationships in multi-domain environments.
Supports MFA (Multi-Factor Authentication).
The End
Microsoft Tech Community – Latest Blogs
Microsoft at TechCon365 and PWRCON – Seattle, WA (June 3-7, 2024)
“The thing I enjoyed most about the event was being around like-minded individuals discussing things that I deal with daily.”
– Previous TechCon365 attendee
What: TechCon365 & PWRCON – Seattle
Register today | Use the MSCMTY discount code to save $200 USD off registration.
Content: 2 Microsoft keynotes + 8 general sessions | 185+ overall sessions, including 50 Microsoft-led sessions | 25+ full-day workshops
Microsoft is sending 45+ product makers to present and engage.
Review all sessions + agenda view, workshops, and their full speaker lineup.
When & where: June 3-7, 2024
In-person: Seattle, WA – Seattle Convention Center
Twitter & hashtag: @TechCon365 | #TechCon365
Cost: $850 – $2,775 (Learn more about ticket pricing options)
At TechCon365 & PWRCON Seattle, a Microsoft 365 Conference & Power Platform Conference, the subject matter is divided into tracks, and each session is designated as beginner, intermediate, advanced, or expert. Tracks are offered for the following subjects: Microsoft 365 Apps, SharePoint, Azure / 365 Development, Microsoft Teams, Power Apps, Content Management, Power Users, Business Value, Implementation/Administration, Power Automate (Flow)/Workflow, Power BI – Business Intelligence, SharePoint Development, and more. Choose one complete learning track or mix and match based on what content best meets your and your organization’s current needs!
With 2 optional days of workshops and a 3-day conference, you can choose from over 130 sessions in multiple tracks and 25 workshops presented by Microsoft 365, SharePoint, Power Platform, Microsoft Teams, Viva, Azure, Copilot & AI’s top experts! Whether you are new to Microsoft 365, Power Platform and SharePoint or an experienced power user, admin or developer, TechCon365 has content designed to fit your experience level and area of interest.
See how the Microsoft 365, SharePoint Power Platform, Azure, and AI ecosystem is growing and evolving by speaking with technical experts from the local Microsoft field and diverse channels within the Microsoft Partner Network – all in our exhibit hall.
Microsoft keynotes, sessions, and workshops: Copilot/AI, SharePoint, OneDrive, Teams, Viva, Power Platform, D&I, and related technology
Microsoft keynotes and AMA
Hear from Microsoft leadership revealing the latest innovations shaping the flexible, innovative, and secure business environments of the future. [all times listed in PDT]
Microsoft 365 keynote: “Thriving in the era of AI”
Presenters: Omar Shahine (CVP), Adam Harmetz (VP), Karuana Gatimu (Principal PM Manager), and Dan Holme (Principal GPM)
Date/Time/Location: Wednesday, June 5th, 8:30am – 9:40am PDT | Room: 6E
Power Platform keynote: “Empowering transformation: Power Platform and Dataverse in the age of AI”
Presenter: Nirav Shah (CVP)
Date/Time/Location: Thursday, June 6th, 8:30am – 9:40am PDT | Room: 6E
Microsoft AMA + SharePint: Wednesday, June 5th, 5:00pm – 7:00pm PDT | Room 6C – Collab Stage
Register today | Note: Use the MSCMTY discount code to save $200 USD off registration.
Take the opportunity to select the sessions best suited for your role and interests. All breakouts bring product updates, demos, customer stories, best practices, and insights into product and solution strategy – including guidance on the future.
And find us in the Community Lounge – a place to connect with Microsoft MVPs, MCMs, Microsoft Regional Directors, and user group leaders via the Ask the Experts tables – where you can pick up some laptop stickers and learn more about community programs in the Exhibit Hall.
TechCon365 (Microsoft 365) | Microsoft-led general and breakout sessions
It is crucial to ensure your organization is technically ready for the full potential of Copilot for Microsoft 365. The sessions below focus on technical readiness and ensuring you have the latest guidance. Our experts will share best practices and provide guidance on how to leverage AI and maximize the benefits of Copilot within your organization.
TechCon365 general sessions
“Creating an AI-powered organization – User satisfaction & adoption practices for Copilot” with Karuana Gatimu | Room 609
“Getting ready for Copilot for Microsoft 365” | with Karuana Gatimu | Room 615:616
“SharePoint Premium – Intelligent content for everyone” with Sesha Mani, Chris McNulty, and Jaclynn Hiranaka | Room 608
“What’s new and next for Microsoft Viva” with Michael Holste and Kristi Kelly | Room 619:620
TechCon365 breakout sessions + workshop
“Copilot to Enhance the Employee Experience” with Jay Leask | Room 604
“The art of prompt engineering in Copilot for Microsoft 365” with Michelle Gilbert | Room 613:614
“Driving rollout & adoption of Microsoft 365 and Copilot with Microsoft Viva” with Heather Cook and Karuana Gatimu | Room 608
“The Future of Your Intranet: Beautiful, flexible and AI-ready powered by SharePoint” with Denise Trabona and Dave Cohen | Room 619:620
“Introducing SharePoint Premium: AI-powered content management for Microsoft 365” with Chris McNulty | Room 615:616
“Unlock SharePoint Premium content services by connecting Azure Pay-as-you-go billing” with Tom Resing | Room 612
“Automatically capture information about incoming files in Microsoft 365” with Tom Resing | Room 612
“The Ins and Outs of Microsoft 365 Backup & Archiving” with Trent Green, Brad Gussin, and Jaclynn Hiranaka | Room 608
“Teams Premium unveiled: Navigating Teams Premium for optimal productivity” with Margi Desai and Mansoor Malik | Room 619:620
“Empowering frontline workers with Microsoft Teams and next-generation AI” with Tulsi Keshkamat | Room 615:616
“Microsoft Teams in a regulated environment” with Max Fritz | Room 602:603
“What’s new in Teams for Education” with Max Fritz | Room 607
“Cultivating trust and leadership excellence: Strategies for respect and empathy in the workplace” with Heather Cook | Room 613:614
“Getting started with Viva Amplify” with Michael Holste and Naomi Moneypenny | Room 608
“Viva Underground: An outcome-based route to success with Microsoft Viva” with Joy Apple and Jay Leask | Room 615:616
“OneDrive: Collaboration and AI at your fingertips” with Ben Truelove | Room 619:620
“New Planner: Unifying task management in Microsoft Teams” with Biatrice Ambrosa | Room 609
“Mastering Microsoft Lists” with Miceile Barrett and Mark Kashman | Room 619:620
“How Microsoft Does IT: Governance and Administration in the Era of Copilot” | Room 615:616
“Managing change in a Microsoft world! Office 365 governance and change management” with Max Fritz and Michelle Gilbert | Room 612
“Top 10 best practices every admin should be doing in Microsoft 365” with Michelle Gilbert | Room 607
“Governance, Information Management, and Teams – What you need to know” with Joy Apple and Jay Leask | Room 606
“Secure collaboration in Microsoft 365 within a zero-trust lens” with Jay Leask | Room 613:614
WORKSHOP | “Ultimate guide to administering Microsoft 365 and Teams” with Max Fritz and Michelle Gilbert | Room 609
TechCon365 developer sessions
“Introduction to extending Copilot for Microsoft 365” with Jeremy Thake | Room 604
“Developing Graph Connector to ground your business data in Copilot for Microsoft 365” with Jeremy Thake | Room 608
“Copilot extensibility with Microsoft Graph Connectors made easy” with Fabian Williams | Room 608
“Introduction to Microsoft Graph” with Fabian Williams | Room 604
“Building Copilot experiences in SharePoint Embedded applications” with Marc Windle | Room 608
“Improve your users’ productivity with custom Viva Connections cards” with Alex Terentiev | Room 607
“Expanding SharePoint Framework Web Parts in Teams, Office and Outlook” with Alex Terentiev | Room 606
“Viva Connections: Create bot-powered adaptive card extensions” with Alex Terentiev | Room 602:603
PWRCON (Power Platform & Microsoft Fabric) | Microsoft-led sessions
Discover more AI innovation and learn about other core investments that help us deliver powerful business applications for your organization. Power Platform and Fabric help you leap ahead in the Age of AI. From keynote to breakouts to workshops, PWRCON provides insights on how the Power Platform, Dataverse, and Fabric leverage existing enterprise data and business processes to unlock the benefits of Copilot. Get up to speed on the latest product updates and turn up your skills dial on real-world solution design and deployment. Drive your digital transformation, learning from the best subject matter experts in the business.
PWRCON general sessions
“Power Automate and automation in the Age of AI: strategy & roadmap” with Ashvini Sharma | Room 619:620
“Power Platform Architecture” with Ilya Grebnov | Room 615:616
“What’s new in Dataverse & AI Builder: How to easily build generative AI business applications” with Yogi Naik | Room 612
“Building the apps of the future today with Power Platform and Copilot” with Leon Welicki | Room 608
PWRCON breakout sessions + workshop
“Copilot is beside me along my RPA journey” with Taiki Yoshida and Chris Garty | Room 615:616
“What’s New with Copilot Studio” with Dewain Robinson and Pawan Taparia | Room 615:616
“Extending Microsoft Copilot products using Copilot Studio” with Dewain Robinson and Pawan Taparia | Room 609
“Deep dive into building Copilots with Copilot Studio” with Dewain Robinson and Pawan Taparia | Room 606
“Extend Copilot Studio with intelligent actions, workflows from Power Automate” with Matt Townsend and Harysh Menon | Room 619:620
“Extend Copilot for Sales using Copilot Studio to empower sales teams with data and insights” with Bharath Varadarajan | Room 609
“Process mining with Copilot and AI: A new frontier for business intelligence” with Heather Orta-Olmo and Derah Onuorah | Room 615:616
“Dataverse: Safeguard AI-enabled Enterprise Applications and Copilots” with Mihaela Blendea | Room 619:620
“Power Pages overview and roadmap” with Meera Mahabala | Room 619:620
“Using your enterprise knowledge for building Q&A experiences in Copilot” with Julie Koesmarno | Room 604
“Securing and governing the Power Platform at scale” with Zohar Raz | Room 609
Microsoft Fabric and Power BI sessions + workshop
“Unlocking insights with Power BI Copilot” with Shannon Lindsay and Alex Powers | Room 609
“Building a modern Data Lake with OneLake: The OneDrive for data” with Josh Caplan | Room 611
“Driving productivity and a data-driven culture with Power BI in Microsoft 365” with Alex Powers and Shannon Lindsay | Room 619:620
“Transform Your Power BI data in Microsoft Fabric” with John White and Jason Himmelstein | Room 611
“Source Control with Power BI and Microsoft Fabric” with John White and Jason Himmelstein | Room 609
“Deep Dive on Power BI, Teams and SharePoint” with John White and Jason Himmelstein | Room 609
“From SQL developer to business analyst: Harnessing Fabric’s innovations” with Charles Webb | Room 612
WORKSHOP | “Everything You Wanted to Know About Power BI… but were afraid to ask!” with John White and Jason Himmelstein | Room 607
Register today | Note: Use the MSCMTY discount code to save $200 USD off registration.
Get the most out of TechCon365: Our top five tips while attending
Introduce yourself | Unique perspectives await, including yours.
Attend as much as you can | Laptops down, eyes open – depth learning, tips, and tricks abound.
Share what you know | Your knowledge saves time – pay it forward.
Ask questions, share feedback | Your issues and ideas inform us and influence the roadmap.
Hydrate and dress for steps | Keep the brain healthy and mind active.
BONUS | Update your LinkedIn profile and photo | Best reflect your professional experience and growing technical aptitude.
Learn more
Visit TechCon365.com/Seattle and follow the action on X/Twitter: @TechCon365, @Microsoft365, @MSFTCopilot, @SharePoint, @OneDrive, @MicrosoftTeams, @MSPowerPlat, @Microsoft365Dev, and @MSFTAdoption.
I hope you will join us in Seattle, WA for what will be a fantastic week in the PNW! We’re looking forward to the action alongside the community, MVPs, and Microsoft product members from Copilot, Teams, Office, SharePoint, OneDrive, Loop, Viva, Power Platform, Lists, Planner, and more.
Remember, use the MSCMTY discount code to save $200 USD off your conference registration. Register today!
Last, a glimpse of the TechCon365/PWRCON event experience:
Cheers and see you there,
Mark Kashman, Senior product manager – Microsoft
Windows Server 2012 manual patching
Hello Team,
I have 2 new servers – Windows Server 2016 standard and Windows 2012 R2 standard.
I need to install security patches manually (download from the internet, copy, and install), as there is no internet access and we don’t have any patching tool.
For Windows Server 2016 Standard I will install the latest Cumulative Update and the latest Servicing Stack Update. I think that is enough.
But what about Windows Server 2012 R2 standard? Which security patches should I install to have this server up-to-date?
Thank you in advance for help.
Copilot for 3rd party system – Advice needed
I am currently working on creating a Copilot intended to be used as a tool for employees to access and retrieve information about customers and the insurances they have in a 3rd party, non-Microsoft, system.
I’m struggling with finding information about some functionalities and best practices and would greatly appreciate your advice:
– The insurances, customers, and claims are queryable via an API, with events on a service bus upon changes; we do not have access to the database.
– The insurances need to be correlated with the corresponding terms & conditions, which are available as PDFs in a blob store or SharePoint.
– Depending on whether it is a customer or an internal administrator, only the relevant insurance/claims data should be part of the dataset included in the response.
– If an insurance is created for a customer, it should be part of the dataset in “near real time”.
A quick response time is crucial, which means pre-indexing data is a necessity.
Ideally, the Copilot should operate swiftly and accurately, but I am also tasked with creating a solution that is easy to set up and maintain. We’re deciding between using Copilot and AI Studio.
What would be the easiest way to implement this, and what would be the best way?
Thank you,
Malin
Unable to fetch more than 5000 records from filtered view
I have a SharePoint list and created filtered views of the list, which contains more than 5,000 records. When I try to retrieve the records I get an error: “The attempted operation is prohibited because it exceeds the list view threshold”. Can anyone help me get the data using pagination, batch by batch in a loop? If possible, please share a code snippet for Python.
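One approach is to page through the list via the SharePoint REST API in batches of at most 5,000, following the server-supplied “next” link instead of pulling the view in one go. A sketch, assuming the `requests` library and a valid bearer token (`SITE_URL`, `LIST_TITLE`, and the token are placeholders); note that a `$filter` on a non-indexed column can still trip the list view threshold, so index the filter column or filter client-side after paging.

```python
SITE_URL = "https://contoso.sharepoint.com/sites/mysite"  # placeholder
LIST_TITLE = "MyLargeList"                                # placeholder
HEADERS = {
    "Accept": "application/json;odata=nometadata",
    "Authorization": "Bearer <TOKEN>",                    # placeholder
}

def iter_list_items(fetch_json, first_url):
    """Yield items batch by batch; fetch_json(url) -> parsed JSON dict."""
    url = first_url
    while url:
        page = fetch_json(url)
        yield from page.get("value", [])
        # SharePoint returns the URL of the next batch, if any.
        url = page.get("odata.nextLink")

def fetch_json(url):
    import requests  # assumed available in your environment
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    first = (f"{SITE_URL}/_api/web/lists/getbytitle('{LIST_TITLE}')"
             "/items?$top=5000")
    items = list(iter_list_items(fetch_json, first))
    print(f"retrieved {len(items)} items")
```

Keeping the paging loop separate from the HTTP call also lets you test it offline by passing a fake `fetch_json`.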
How to change SP online site domain
Hi,
I want to change the SP Online domain URL from https://contoso.sharepoint.com to https://contoso.com
Please suggest whether this requires a higher license, or whether it is possible at all.
Thanks,
Deepak
Different meeting stage for host and guest
Environment: Teams web and desktop (new version 1415/24031414721), ASP.NET Core 6, third-party cookies
host = user that starts the Teams app, logs in (authenticates), and invokes shareAppContentToStage
guest = all other users
1. manifest loads sidebar via manifest config (for example configurationURL: https://test.de/sidebar)
2. sidebar contains a Login button which authenticates against https://test.de/login
3. after a successful login (cookie authentication) in the iFrame => the sidebar is redirected to https://test.de/host
4. javascript of host calls “microsoftTeams.meeting.shareAppContentToStage(handleShareScreenAction, `https://test.de/guestorhost`);”
5. endpoint /guestorhost checks if request is authenticated (in this example only user with sidebar logged in)
5.1 If the request is authenticated => redirect to https://test.de/hoststage
5.2 If the request is NOT authenticated => redirect to https://test.de/gueststage
Expected result: host should always get /hoststage, guest should always get /gueststage
Current result: sometimes it works properly; sometimes the host gets /gueststage or nothing
My guess is that third-party cookies are not working reliably: sometimes they are sent and sometimes not.
Microsoft Store latest changes with app downloads
Just wondering if any other IT admins are dealing with the recent changes to the Microsoft Store app downloads and how instead of launching the full MS Store, it now launches a containerized version. While this does lead to a better user experience, the unfortunate side effect is it bypasses our current restriction to block the MS Store from users.
We currently control all our MS Store apps in Intune via the Company Portal and have the policy to “block MS Store” enabled. However, the change where you can download the apps via a self contained EXE on the website now bypasses this block, presumably because the containerized version of the app installer is not referencing any of these policies.
Can Microsoft please address this? We don’t really want to block apps.microsoft.com but if this behavior isn’t changed this might be the end result.
Why comments are not imported into Planner from Trello with apps4.Pro?
Hello, I am using the apps4.pro tool to transfer content from my team’s Trello to a new Planner plan. Every time I do this with the administrator, the comments that were made on Trello are not imported into Planner, even though the tool’s documentation confirms that comments will also be transferred. We exported the Trello cards with all their content into a JSON file, which is normally supported by apps4.Pro. In that file, the comments are correctly saved and are accessible via a key named “text”. Is there a particular format required for Planner to recognize and import comments?
Decommissioning a single no-longer-used Exchange Server 2013
About 1.5 years ago we moved all the email databases to Microsoft 365 and have been using the cloud solution since then. This was not, and is not, a hybrid connection between the on-premises Exchange server and Microsoft 365. Now I want to delete all mailboxes on the on-premises server and uninstall it.
I am looking at this tutorial: https://techcommunity.microsoft.com/t5/exchange-team-blog/decommissioning-exchange-server-2013/ba-p/3613793. With some differences (e.g. disabling and deleting mailboxes instead of migrating them), this seems to be a good approach. But another tutorial suggests that a single server should be left for management (though that tutorial considers a hybrid installation with Exchange 2016/2019): https://www.alitajran.com/remove-last-exchange-server/
My question is: how do I remove (or at least minimize) a no-longer-used Exchange Server 2013 installation and Windows Server 2012 R2?
Thank you in advance.
MCP Certification Transcript not Found on my MCID
Hello,
I received my MCP in 2006 but have not logged in to the platform for many years. I was able to recover the account using my MCID, but despite merging the account with Microsoft Learn several days ago, I could not find my MCP SQL transcript anywhere.
Is there a way to retrieve a copy of my transcript?
Thank you for your help
Power Query only returning 500,000 rows of data into excel
I have a Power Query that connects to an Azure Log Analytics workspace and pulls back data, which I then use to populate an Excel spreadsheet and generate graphs and pivot tables.
I have just noticed that the records returned into Excel cap out at 500,000, and I know that there are more than 500,000 records.
Is there a limit? I can’t figure out if it’s my query, or something else.
Improving RAG performance with Azure AI Search and Azure AI prompt flow in Azure AI Studio
Content authored by: Arpita Parmar
Introduction
If you’ve been delving into the potential of large language models (LLMs) for search and retrieval tasks, you’ve probably encountered Retrieval Augmented Generation (RAG) as a valuable technique. RAG enriches LLM-generated responses by integrating relevant contextual information, particularly when connected to private data sources. This integration empowers the model to deliver more accurate and contextually rich responses.
Challenges with RAG evaluation
Evaluating RAG poses several challenges and requires a multifaceted approach: assessing both response quality and retrieval effectiveness is essential to ensuring optimal performance.
Traditional evaluation metrics for RAG applications, while useful, have certain limitations that can impact their effectiveness in accurately assessing RAG performance. Some of these limitations include:
Inability to fully capture user intent: Traditional evaluation metrics often focus on lexical and semantic aspects but may not fully capture the underlying user intent behind a query. This can result in a disconnect between the metrics used to evaluate RAG performance and the actual user experience.
Reliance on ground truth: Many traditional evaluation metrics rely on the availability of a pre-defined ground truth to compare system-generated responses against. However, establishing ground truth can be challenging, particularly for complex queries or those with multiple valid answers. This can limit the applicability of these metrics in certain scenarios.
Limited applicability across different query types: Traditional evaluation metrics may not be equally effective across different query types, such as fact-seeking, concept-seeking, or keyword queries. This can result in an incomplete or skewed assessment of RAG performance, particularly when dealing with diverse query types.
Overall, while traditional evaluation metrics offer valuable insights into RAG performance, they are not without their limitations. Incorporating user feedback into the evaluation process adds another layer of insight, bridging the gap between quantitative metrics and qualitative user experiences. Therefore, adopting a multifaceted approach that considers retrieval quality, relevance of response to retrieval, user intent, ground truth availability, query type diversity, and user feedback is essential for a comprehensive and accurate evaluation of RAG systems.
Improving RAG Application’s Retrieval with Azure AI Search
When evaluating RAG applications, it is crucial to accurately assess retrieval effectiveness and to tune the relevance of retrieval. Since the retrieved data is key to a successful implementation of the RAG pattern, integrating Azure AI Search as the retrieval system can significantly enhance the quality of your results. Although AI Search offers keyword (full-text), vector, and hybrid search capabilities, this post focuses on hybrid search. The hybrid approach can be particularly beneficial in scenarios where retrieval performance is varied or insufficient: by combining keyword and vector-based search techniques, hybrid search can improve the accuracy and completeness of the retrieved documents, which in turn positively impacts the relevance of the generated responses.
The hybrid search process in Azure AI Search involves the following steps:
Keyword search: An initial keyword index search finds documents containing the query terms using the BM25 ranking algorithm.
Vector search: In parallel, vector search uses dense vector representations to map the query to semantically similar documents, leveraging embeddings in vector fields with the Hierarchical Navigable Small World (HNSW) or exhaustive k-nearest neighbors (KNN) algorithm.
Result merging: The results from the keyword and vector searches are merged using the Reciprocal Rank Fusion (RRF) algorithm.
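The merging step above can be sketched in pure Python. This is an illustrative implementation of Reciprocal Rank Fusion, not Azure AI Search's internal code; the constant k=60 is the value commonly cited in the RRF literature.

```python
def reciprocal_rank_fusion(keyword_ranking, vector_ranking, k=60):
    """Merge two ranked lists of document IDs with Reciprocal Rank Fusion.

    Each document's fused score is the sum of 1 / (k + rank) over every
    ranking it appears in (ranks are 1-based). Documents ranked well in
    both lists accumulate the highest fused scores.
    """
    scores = {}
    for ranking in (keyword_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Example: doc "b" ranks well in both lists, so it wins the fusion.
keyword_hits = ["a", "b", "c"]
vector_hits = ["b", "d", "a"]
print(reciprocal_rank_fusion(keyword_hits, vector_hits))  # → ['b', 'a', 'd', 'c']
```

Note how "b" outranks "a" even though "a" is first in the keyword list: RRF rewards consistent placement across both rankings rather than a single top spot.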
Enhancing Retrieval: Quality of retrieval & relevance tuning
When tuning retrieval relevance and quality, there are several strategies to consider:
Document processing: Experiment with chunk size and overlap to preserve context or continuity between chunks.
Document understanding: Embeddings play a pivotal role in enabling pipelines to understand documents in relation to user queries. By transforming documents and queries into dense vector representations, embeddings facilitate the measurement of semantic similarity between them. Consider selecting an appropriate embedding model. For example, higher-dimensional embeddings can store more context information but may require more computational resources, while smaller-dimensional embeddings are more efficient but may sacrifice some context.
Vector search configuration: Adjust the efConstruction parameter for HNSW to change the internal composition of the proximity graph, i.e. the way the search algorithm organizes information internally. Think of this configuration like building a map: adjusting the parameter helps the algorithm decide how many landmarks to use and how far apart they should be, which affects how quickly and accurately it finds relevant information.
Query-time parameter: Increase the number of results (k) to feed more search results into the generation step. This parameter determines how many search results are returned for each query; increasing k means the system provides more potential matches, which can be useful when trying to find the best answer among many possibilities.
Enhancing hybrid search with Semantic re-ranking: To further enhance the quality of search results, a semantic re-ranking step can be added. Also known as L2, this layer takes a subset of the top L1 results and computes higher-quality relevance scores to reorder the result set. The L2 ranker can significantly improve the ranking of results already found by the L1, critical for RAG applications to ensure the best results are in the top positions. In Azure Search, this is done using a semantic ranker developed in partnership with Bing, which leverages vast amounts of data and machine learning expertise. The re-ranking step helps optimize relevance by ensuring that the most related documents are presented at the top of the list.
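To make the document-understanding point above concrete, here is a minimal sketch of how semantic similarity between a query embedding and document embeddings is typically measured with cosine similarity. The vectors are toy values, not output from a real embedding model.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings"; real embedding models produce
# hundreds or thousands of dimensions.
query = [0.1, 0.9, 0.2]
doc_same_topic = [0.2, 0.8, 0.1]
doc_off_topic = [0.9, 0.1, 0.0]

print(cosine_similarity(query, doc_same_topic))  # close to 1.0
print(cosine_similarity(query, doc_off_topic))   # much lower
```

Vector search ranks documents by exactly this kind of similarity score; the higher-dimensional the embedding, the more contextual nuance the score can capture, at the cost of storage and compute.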
By unifying these retrieval techniques and configurations, hybrid search can handle queries more effectively compared to using just keywords or vectors alone. It excels at finding relevant documents even when users query with concepts, abbreviations or phraseology different from the documents.
A recent Microsoft study highlights that hybrid search with semantic re-ranking outperforms traditional vector search methods like dense and sparse passage retrieval across diverse question-answering tasks.
According to this study, key advantages with hybrid search with semantic re-ranking include:
Higher answer recall: Returning higher quality answers more often across varied question types.
Broader query coverage: Handling abbreviations and rare terms that vector search struggles with.
Increased precision: Merged results combining keyword statistics and semantic relevance signals.
Now that we’ve covered retrieval tuning, let’s turn our attention to evaluating generation and streamlining the RAG pipeline evaluation process. Azure AI prompt flow offers a comprehensive framework to streamline RAG evaluation.
Azure AI prompt flow
Prompt flow streamlines RAG evaluation with a multifaceted approach by efficiently comparing prompt variations, integrating user feedback, and supporting both traditional metrics and AI-generated metrics that don’t require ground truth data. It enables tailored responses for diverse queries, simplifying retrieval and response evaluation while providing comprehensive insights for improved RAG performance.
Both Azure AI Search and Azure AI prompt flow are available in Azure AI Studio, a unified platform for responsibly developing and deploying generative AI applications. The one-stop-shop platform enables developers to explore the latest APIs and models, access comprehensive tooling to support the generative AI development lifecycle, design applications responsibly, and deploy and scale models, flows and apps at scale with continuous monitoring.
With Azure AI Search, developers can connect models to their protected data for advanced fine-tuning and contextually relevant retrieval augmented generation. With Azure AI prompt flow, developers can orchestrate AI workflows with prompt orchestration, interactive visual flows, and code-first experiences to build sophisticated and customized enterprise chat applications.
Here is a video of how to build and deploy an enterprise chat application with Azure AI Studio.
Evaluating RAG applications in prompt flow revolves around three key aspects:
Prompt variations: Prompt variation testing, informed by user feedback, ensures tailored responses for diverse queries, enhancing user intent understanding and addressing various query types effectively.
Retrieval evaluation: This involves assessing the accuracy and relevance of the retrieved documents.
Response evaluation: The focus is on measuring the appropriateness of the LLM-generated response when provided with the context.
Below are the evaluation metrics for RAG applications in prompt flow; each entry notes the metric type and whether it is AI assisted or ground truth based.
Groundedness (Generation, AI assisted): Measures how well the model’s generated answers align with information from the source data (user-defined context).
Relevance (Generation, AI assisted): Measures the extent to which the model’s generated responses are pertinent and directly related to the given questions.
Retrieval Score (Retrieval, AI assisted): Measures the extent to which the model’s retrieved documents are pertinent and directly related to the given questions.
Accuracy, Precision, Recall, F1 score (Generation, ground truth based): Measure the RAG system’s responses against a set of predefined, correct answers, e.g. the ratio of the number of shared words between the model generation and the ground truth answers.
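The ground-truth-based scores can be illustrated with a simple token-overlap F1. This is a minimal sketch of the "shared words" idea, not prompt flow's exact implementation:

```python
from collections import Counter

def token_overlap_f1(generated, ground_truth):
    """F1 over shared words between a generated answer and a reference.

    Precision = shared tokens / generated tokens, recall = shared
    tokens / reference tokens; F1 is their harmonic mean.
    """
    gen = Counter(generated.lower().split())
    ref = Counter(ground_truth.lower().split())
    shared = sum((gen & ref).values())  # multiset intersection
    if shared == 0:
        return 0.0
    precision = shared / sum(gen.values())
    recall = shared / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 words are shared in each direction, so F1 = 5/6 ≈ 0.83.
print(token_overlap_f1("the cat sat on the mat", "the cat is on the mat"))
```

A metric like this is cheap and deterministic, but it only works when a correct reference answer exists, which is exactly the limitation the AI-assisted metrics below are designed to avoid.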
There are three AI-assisted metrics available in prompt flow that do not require ground truth. Traditional metrics based on ground truth are useful when testing RAG applications in development, but AI-assisted metrics offer enhanced capabilities for evaluating responses, especially in situations where ground truth data is unavailable. These metrics provide valuable insight into the performance of the RAG application in real-world scenarios, enabling a more comprehensive assessment of user interactions and system behavior. The three metrics are:
Groundedness: Groundedness ensures that the responses from the LLM align with the context provided and are verifiable against the available sources. It confirms factual accuracy and ensures that the conversation remains grounded when all responses meet this criterion.
Relevance: Relevance measures the appropriateness of the generated answers to the user’s query based on the retrieved documents. It assesses whether the response provides sufficient information to address the question and adjusts the score accordingly if the answer lacks relevance or contains unnecessary details.
Retrieval Score: The retrieval score reflects the quality and relevance of the retrieved documents to the user’s query. It breaks down the user query into intents, assesses the presence of relevant information in the retrieved documents, and calculates the fraction of intents with affirmative responses to determine relevance.
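The retrieval score computation described above can be sketched as a toy function. The intent map and the 1-to-5 scale are illustrative assumptions, not prompt flow's actual scoring prompt:

```python
def retrieval_score(intent_covered, scale=5):
    """Toy retrieval score: the fraction of query intents answered
    affirmatively by the retrieved documents, mapped onto a 1..scale band.

    `intent_covered` maps each intent extracted from the user query to
    True/False depending on whether the retrieved documents contain
    relevant information for it. Illustrative sketch only.
    """
    if not intent_covered:
        return 0.0
    fraction = sum(intent_covered.values()) / len(intent_covered)
    return round(1 + fraction * (scale - 1), 2)

# A query decomposed into three intents, two of which the retrieved
# documents actually cover (hypothetical example).
intents = {
    "what is hybrid search": True,
    "how is RRF configured": True,
    "pricing for semantic ranker": False,
}
print(retrieval_score(intents))  # → 3.67
```

In the real AI-assisted metric an LLM performs both the intent decomposition and the affirmative/negative judgment; only the final aggregation resembles this arithmetic.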
Groundedness, relevance, and the retrieval score, along with prompt variant testing in prompt flow, collectively provide insight into the performance of RAG applications. This enables refinement of RAG applications, addressing challenges such as information overload, incorrect responses, and insufficient retrieval, and ensuring more accurate responses throughout the end-to-end evaluation process.
Potential scenarios to evaluate RAG workflows
Now, let’s explore three potential scenarios for evaluating RAG workflows and how prompt flow and Azure AI Search help in evaluating them.
Scenario 1: Successful Retrieval and Response
This scenario entails the seamless integration of relevant contextual information with accurate and appropriate responses generated by the RAG application: both retrieval and response are good.
In this scenario, all three metrics perform optimally. Groundedness ensures factual accuracy and verifiability, relevance ensures the appropriateness of the answer to the query, and the retrieval score reflects the quality and relevance of the retrieved documents.
Scenario 2: Inaccurate Response, Insufficient Retrieval
Here, despite the retrieval of relevant documents, the response from the LLM is inaccurate. Groundedness may suffer if the response lacks verifiability against the provided sources. Relevance may also be compromised if the response does not adequately address the user’s query. The retrieval score might indicate successful document retrieval but fails to capture the inadequacy of the response.
To address this challenge, Azure AI Search retrieval tuning can be leveraged to enhance the retrieval process, ensuring that the most relevant and accurate documents are retrieved. By fine-tuning the search parameters discussed above in section “Enhancing Retrieval: Quality of retrieval & relevance tuning,” Azure AI Search can significantly improve the retrieval score, thereby increasing the likelihood of obtaining relevant documents for the given query.
Additionally, you can refine the LLM’s prompt by incorporating a conditional statement within the prompt template, such as “if relevant content is unavailable and no conclusive solution is found, respond with ‘unknown’.” Leveraging prompt flow, which allows for the evaluation and comparison of different prompt variations, you can assess the merit of various prompts and select the most effective one for handling such situations. This approach ensures accuracy and honesty in the model’s responses, acknowledging its limitations and avoiding the dissemination of inaccurate information.
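A minimal sketch of such a guarded prompt template follows; the wording and placeholder names are illustrative, not a prompt flow built-in:

```python
GUARDED_PROMPT = """You are an assistant that answers strictly from the
provided context.

Context:
{context}

Question: {question}

If relevant content is unavailable and no conclusive answer can be
drawn from the context, respond with exactly: unknown"""

def build_prompt(question, retrieved_chunks):
    """Fill the guarded template with the retrieved context chunks."""
    if retrieved_chunks:
        context = "\n---\n".join(retrieved_chunks)
    else:
        context = "(no documents retrieved)"
    return GUARDED_PROMPT.format(context=context, question=question)

prompt = build_prompt(
    "What ranking algorithm merges hybrid results?",
    ["Hybrid results are merged with Reciprocal Rank Fusion (RRF)."],
)
print(prompt)
```

In prompt flow, this template and a variant without the "unknown" clause could be registered as two prompt variants and compared on the groundedness metric to verify that the guard actually reduces fabricated answers.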
Scenario 3: Incorrect Response, Varied Retrieval Performance
In this scenario, the retrieval of relevant documents is followed by an inaccurate response from the LLM. Groundedness may be maintained if the responses remain verifiable against the provided sources. However, relevance is compromised as the response fails to address the user’s query accurately. The retrieval score might indicate successful document retrieval, but the flawed response highlights the limitations of the LLM.
Evaluation in this scenario involves several key steps facilitated by Azure AI prompt flow and Azure AI Search:
Acquiring Relevant Context: Embedding a user query to search a vector database for pertinent chunks is crucial. The success of retrieval relies on the semantic similarity of these chunks to the query and their ability to provide relevant information for generating accurate responses (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
Optimizing Parameters: Adjusting parameters such as retrieval type (hybrid, vector, keyword), chunk size, and K value is necessary to enhance RAG application performance. (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
Prompt Variants: Utilizing prompt flow, developers can test and compare various prompt variations to optimize response quality. By iterating prompt templates and LLM selections, prompt flow enables rapid experimentation and refinement of prompts, ensuring that the retrieved content is effectively utilized to produce accurate responses. (see section “How to evaluate RAG with Azure Machine Learning prompt flow”).
Refining Response Generation Strategies: Moreover, exploring different text extraction techniques and embedding models alongside experimenting with chunking strategies can further improve overall RAG performance. (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
How to evaluate RAG with Azure AI prompt flow
In this section, let’s walk through the step-by-step process of testing RAG using prompt variants with the prompt flow using metrics such as groundedness, relevance, and retrieval score.
Prerequisite: Build RAG using Azure Machine Learning prompt flow.
1. Prepare Test Data: Ideally, you should prepare a test dataset of 50-100 samples but for this article we will prepare a test dataset with a few samples. Save this as a csv file.
2. Add test data to Azure AI Studio: In your AI Studio project, under Components, select Data -> New data.
3. Select Upload files/folders and upload the test data from a local drive. Click on Next, provide a name to your data location and click on Create.
4. Once the test data is uploaded you can see its details.
5. Evaluate the flow: Under Tools -> Evaluation, click on New evaluation. Choose Conversation with context and select a flow you want to evaluate. Here we are testing two variants of prompt: Variant_0 and Variant_1. Click on Next.
6. Configure the test data. Click on Next.
7. Under Select Metrics RAG metrics are automatically selected based on the scenario you have chosen. Refer to more details of metrics. Choose your Azure OpenAI Service instance and model and click on Next.
8. Review and finish. Click on Submit.
9. Once the evaluation is complete it will be displayed under Evaluations.
10. Check the results by clicking on the evaluation. You can compare the two variants of prompts by comparing their metrics to see which prompt variant is performing better.
11. You can check the result of individual prompt variant evaluation metrics under the Output tab -> Metrics dashboard.
12. Also, under the Output tab, you can also see a detailed view of the metrics under Detailed metrics result.
13. Under the Trace tab, you can trace how many tokens were generated and the duration for each test question.
Conclusion:
The integration of Azure AI Search into the RAG pipeline can significantly improve retrieval scores. This enhancement ensures that retrieved documents are more aligned with user queries, thus enriching the foundation upon which responses are generated. Furthermore, by integrating Azure AI Search and Azure AI prompt flow in Azure AI Studio, developers can test and optimize response generation to improve groundedness and relevance. This approach not only elevates RAG application performance but also fosters more accurate, contextually relevant, and user-centric responses.
Microsoft Tech Community – Latest Blogs –Read More
8 May 2024: Copilot Business Case Builder – how to calculate the benefits
Copilot Business Case Builder – how to calculate business benefits for the customer’s executive team, webinar on 8 May 2024
Webinar on 8 May 2024, 9:00–10:00.
Register using this link.
Do your customers need other justification than time savings? Does it feel like your customers are hesitating over investment decisions?
A unique opportunity for everyone to come and hear the best tips for selling the Copilot for Microsoft 365 solution. In this webinar we will give you the keys to sales challenges, as Microsoft’s Business Case Builder guru Benny van Well presents a new way to calculate the value and benefits of Copilot for the customer.
Customers (and you) need more than time savings. In the webinar, Benny explains why and how business benefits are calculated so that the customer’s executive team can make an investment decision. Copilot discussions should always be taken to the executive team, not the customer’s IT function.
By the end of this webinar, all participants will have the readiness and understanding needed to justify the Copilot investment decision to the customer.
Before the webinar, please familiarize yourself with this material: Microsoft Business Case Builder
Exceptionally, this webinar will be held in English!
A recording will be available afterwards via the same registration link in Cloud Champion!
Office Activations per user with devices specified
Hi,
Is it possible to get a report of all users who have activated Office, including the name of the device (Windows, Apple, or Android device name)?
I know that I can go through each user, one by one, to get that information, but having a report would be useful when searching for a specific computer name.
Thank you for your reply 🙂
Regards,
José
Multiple conditions case
I’m working on an Excel workbook where I want to see a value (Column 1) in Column 12 if the value in Column 3 = C1, the value in Column 2 is >= C3, and the value in Column 2 is <= D3. Only when all three conditions are true do I want to see the value from Column 1; otherwise I want to see “NO”.
I tried this formula: =IFS([@Column3]='Shift Pattern'!C1,[@Column1],[@Column2]>'Shift Pattern'!$C$3,[@Column1],[@Column2]<'Shift Pattern'!$D$3,[@Column1])
The formula above shows the Column 1 value even if only one of the conditions is true and ignores the others.
Can you please tell me how I can apply the conditions described above?
Either I am applying the formula wrong or I am applying the wrong formula. Which is the case?
add additional horizontal line on graph
Hello,
I want to add an additional horizontal line to this graph; it would be linear and at y=24.8. It seems a simple problem but I can’t figure out how to do it. I have attached my graph and table data.
Thanks