Tag Archives: microsoft
Microsoft at TechCon365 and PWRCON – Seattle, WA (June 3-7, 2024)
“The thing I enjoyed most about the event was being around like-minded individuals discussing things that I deal with daily.”
– Previous TechCon365 attendee
What: TechCon365 & PWRCON – Seattle
Register today | Use the MSCMTY discount code to save $200 USD off registration.
Content: 2 Microsoft keynotes + 8 general sessions | 185+ overall sessions – 50 Microsoft-led sessions | 25+ full-day workshops
Microsoft is sending 45+ product makers to present and engage.
Review all sessions + agenda view, workshops, and their full speaker lineup.
When & where: June 3-7, 2024
In-person: Seattle, WA – Seattle Convention Center
Twitter & hashtag: @TechCon365 | #TechCon365
Cost: $850 – $2,775 (Learn more about ticket pricing options)
At TechCon365 & PWRCON Seattle, a Microsoft 365 Conference & Power Platform Conference, the subject matter is divided into tracks and each session is designated for beginner, intermediate, advanced or expert. Tracks are offered for the following subjects: Microsoft 365 Apps, SharePoint, Azure / 365 Development, Microsoft Teams, Power Apps, Content Management, Power Users, Business Value, Implementation/Administration, Power Automate (Flow)/Workflow, Power BI – Business Intelligence, SharePoint Development, and more. Choose one complete learning track or mix and match based on what content best meets you and your organization’s current needs!
With 2 optional days of workshops and a 3-day conference, you can choose from over 130 sessions in multiple tracks and 25 workshops presented by Microsoft 365, SharePoint, Power Platform, Microsoft Teams, Viva, Azure, Copilot & AI’s top experts! Whether you are new to Microsoft 365, Power Platform and SharePoint or an experienced power user, admin or developer, TechCon365 has content designed to fit your experience level and area of interest.
See how the Microsoft 365, SharePoint, Power Platform, Azure, and AI ecosystem is growing and evolving by speaking with technical experts from the local Microsoft field and diverse channels within the Microsoft Partner Network – all in our exhibit hall.
Microsoft keynotes, sessions, and workshops: Copilot/AI, SharePoint, OneDrive, Teams, Viva, Power Platform, D&I, and related technology
Microsoft keynotes and AMA
Hear from Microsoft leadership revealing the latest innovations shaping the flexible, innovative, and secure business environments of the future. [all times listed in PDT]
Microsoft 365 keynote: “Thriving in the era of AI”
Presenters: Omar Shahine (CVP), Adam Harmetz (VP), Karuana Gatimu (Principal PM Manager), and Dan Holme (Principal GPM)
Date/Time/Location: Wednesday, June 5th, 8:30am – 9:40am PDT | Room: 6E
Power Platform keynote: “Empowering transformation: Power Platform and Dataverse in the age of AI”
Presenter: Nirav Shah (CVP)
Date/Time/Location: Thursday, June 6th, 8:30am – 9:40am PDT | Room: 6E
Microsoft AMA + SharePint: Wednesday, June 5th, 5:00pm – 7:00pm PDT | Room 6C – Collab Stage
Register today | Note: Use the MSCMTY discount code to save $200 USD off registration.
Take the opportunity to select the sessions best suited for your role and interests. All breakouts bring product updates, demos, customer stories, best practices, and insights into product and solution strategy – including guidance on the future.
And find us in the Community Lounge in the Exhibit Hall – a place to connect with Microsoft MVPs, MCMs, Microsoft Regional Directors, and user group leaders at the Ask the Experts tables, where you can pick up laptop stickers and learn more about community programs.
TechCon365 (Microsoft 365) | Microsoft-led general and breakout sessions
It is crucial to ensure your organization is technically ready for the full potential of Copilot for Microsoft 365. The sessions below focus on technical readiness and ensuring you have the latest guidance. Our experts will share best practices and provide guidance on how to leverage AI and to maximize the benefits of Copilot within your organization.
TechCon365 general sessions
“Creating an AI-powered organization – User satisfaction & adoption practices for Copilot” with Karuana Gatimu | Room 609
“Getting ready for Copilot for Microsoft 365” with Karuana Gatimu | Room 615:616
“SharePoint Premium – Intelligent content for everyone” with Sesha Mani, Chris McNulty, and Jaclynn Hiranaka | Room 608
“What’s new and next for Microsoft Viva” with Michael Holste and Kristi Kelly | Room 619:620
TechCon365 breakout sessions + workshop
“Copilot to Enhance the Employee Experience” with Jay Leask | Room 604
“The art of prompt engineering in Copilot for Microsoft 365” with Michelle Gilbert | Room 613:614
“Driving rollout & adoption of Microsoft 365 and Copilot with Microsoft Viva” with Heather Cook and Karuana Gatimu | Room 608
“The Future of Your Intranet: Beautiful, flexible and AI-ready powered by SharePoint” with Denise Trabona and Dave Cohen | Room 619:620
“Introducing SharePoint Premium: AI-powered content management for Microsoft 365” with Chris McNulty | Room 615:616
“Unlock SharePoint Premium content services by connecting Azure Pay-as-you-go billing” with Tom Resing | Room 612
“Automatically capture information about incoming files in Microsoft 365” with Tom Resing | Room 612
“The Ins and Outs of Microsoft 365 Backup & Archiving” with Trent Green, Brad Gussin, and Jaclynn Hiranaka | Room 608
“Teams Premium unveiled: Navigating Teams Premium for optimal productivity” with Margi Desai and Mansoor Malik | Room 619:620
“Empowering frontline workers with Microsoft Teams and next-generation AI” with Tulsi Keshkamat | Room 615:616
“Microsoft Teams in a regulated environment” with Max Fritz | Room 602:603
“What’s new in Teams for Education” with Max Fritz | Room 607
“Cultivating trust and leadership excellence: Strategies for respect and empathy in the workplace” with Heather Cook | Room 613:614
“Getting started with Viva Amplify” with Michael Holste and Naomi Moneypenny | Room 608
“Viva Underground: An outcome-based route to success with Microsoft Viva” with Joy Apple and Jay Leask | Room 615:616
“OneDrive: Collaboration and AI at your fingertips” with Ben Truelove | Room 619:620
“New Planner: Unifying task management in Microsoft Teams” with Biatrice Ambrosa | Room 609
“Mastering Microsoft Lists” with Miceile Barrett and Mark Kashman | Room 619:620
“How Microsoft Does IT: Governance and Administration in the Era of Copilot” | Room 615:616
“Managing change in a Microsoft world! Office 365 governance and change management” with Max Fritz and Michelle Gilbert | Room 612
“Top 10 best practices every admin should be doing in Microsoft 365” with Michelle Gilbert | Room 607
“Governance, Information Management, and Teams – What you need to know” with Joy Apple and Jay Leask | Room 606
“Secure collaboration in Microsoft 365 within a zero-trust lens” with Jay Leask | Room 613:614
WORKSHOP | “Ultimate guide to administering Microsoft 365 and Teams” with Max Fritz and Michelle Gilbert | Room 609
TechCon365 developer sessions
“Introduction to extending Copilot for Microsoft 365” with Jeremy Thake | Room 604
“Developing Graph Connector to ground your business data in Copilot for Microsoft 365” with Jeremy Thake | Room 608
“Copilot extensibility with Microsoft Graph Connectors made easy” with Fabian Williams | Room 608
“Introduction to Microsoft Graph” with Fabian Williams | Room 604
“Building Copilot experiences in SharePoint Embedded applications” with Marc Windle | Room 608
“Improve your users’ productivity with custom Viva Connections cards” with Alex Terentiev | Room 607
“Expanding SharePoint Framework Web Parts in Teams, Office and Outlook” with Alex Terentiev | Room 606
“Viva Connections: Create bot-powered adaptive card extensions” with Alex Terentiev | Room 602:603
PWRCON (Power Platform & Microsoft Fabric) | Microsoft-led sessions
Discover more AI innovation and learn about other core investments that help us deliver powerful business applications for your organization. Power Platform and Fabric help you leap ahead in the Age of AI. From keynote to breakouts to workshops, PWRCON provides insights on how the Power Platform, Dataverse, and Fabric leverage existing enterprise data and business processes to unlock the benefits of Copilot. Get up to speed on the latest product updates and turn up your skills dial on real-world solution design and deployment. Drive your digital transformation, learning from the best subject matter experts in the business.
PWRCON general sessions
“Power Automate and automation in the Age of AI: strategy & roadmap” with Ashvini Sharma | Room 619:620
“Power Platform Architecture” with Ilya Grebnov | Room 615:616
“What’s new in Dataverse & AI Builder: How to easily build generative AI business applications” with Yogi Naik | Room 612
“Building the apps of the future today with Power Platform and Copilot” with Leon Welicki | Room 608
PWRCON breakout sessions + workshop
“Copilot is beside me along my RPA journey” with Taiki Yoshida and Chris Garty | Room 615:616
“What’s New with Copilot Studio” with Dewain Robinson and Pawan Taparia | Room 615:616
“Extending Microsoft Copilot products using Copilot Studio” with Dewain Robinson and Pawan Taparia | Room 609
“Deep dive into building Copilots with Copilot Studio” with Dewain Robinson and Pawan Taparia | Room 606
“Extend Copilot Studio with intelligent actions, workflows from Power Automate” with Matt Townsend and Harysh Menon | Room 619:620
“Extend Copilot for Sales using Copilot Studio to empower sales teams with data and insights” with Bharath Varadarajan | Room 609
“Process mining with Copilot and AI: A new frontier for business intelligence” with Heather Orta-Olmo and Derah Onuorah | Room 615:616
“Dataverse: Safeguard AI-enabled Enterprise Applications and Copilots” with Mihaela Blendea | Room 619:620
“Power Pages overview and roadmap” with Meera Mahabala | Room 619:620
“Using your enterprise knowledge for building Q&A experiences in Copilot” with Julie Koesmarno | Room 604
“Securing and governing the Power Platform at scale” with Zohar Raz | Room 609
Microsoft Fabric and Power BI sessions + workshop
“Unlocking insights with Power BI Copilot” with Shannon Lindsay and Alex Powers | Room 609
“Building a modern Data Lake with OneLake: The OneDrive for data” with Josh Caplan | Room 611
“Driving productivity and a data-driven culture with Power BI in Microsoft 365” with Alex Powers and Shannon Lindsay | Room 619:620
“Transform Your Power BI data in Microsoft Fabric” with John White and Jason Himmelstein | Room 611
“Source Control with Power BI and Microsoft Fabric” with John White and Jason Himmelstein | Room 609
“Deep Dive on Power BI, Teams and SharePoint” with John White and Jason Himmelstein | Room 609
“From SQL developer to business analyst: Harnessing Fabric’s innovations” with Charles Webb | Room 612
WORKSHOP | “Everything You Wanted to Know About Power BI… but were afraid to ask!” with John White and Jason Himmelstein | Room 607
Register today | Note: Use the MSCMTY discount code to save $200 USD off registration.
Get the most out of TechCon365: Our top five tips while attending
Introduce yourself | Unique perspectives await, including yours.
Attend as much as you can | Laptops down, eyes open – in-depth learning, tips, and tricks abound.
Share what you know | Your knowledge saves time – pay it forward.
Ask questions, share feedback | Your issues and ideas inform us and influence the roadmap.
Hydrate and dress for steps | Keep the brain healthy and mind active.
BONUS | Update your LinkedIn profile and photo | Best reflect your professional experience and growing technical aptitude.
Learn more
Visit TechCon365.com/Seattle and follow the action on X/Twitter: @TechCon365, @Microsoft365, @MSFTCopilot, @SharePoint, @OneDrive, @MicrosoftTeams, @MSPowerPlat, @Microsoft365Dev, and @MSFTAdoption.
I hope you will join us in Seattle, WA for what will be a fantastic week in the PNW! We’re looking forward to the action alongside the community, MVPs, and Microsoft product members from Copilot, Teams, Office, SharePoint, OneDrive, Loop, Viva, Power Platform, Lists, Planner, and more.
Remember, use the MSCMTY discount code to save $200 USD off your conference registration. Register today!
Last, a glimpse of the TechCon365/PWRCON event experience:
Cheers and see you there,
Mark Kashman, Senior product manager – Microsoft
Windows Server 2012 manual patching
Hello Team,
I have 2 new servers – Windows Server 2016 standard and Windows 2012 R2 standard.
I need to install security patches manually (download from the internet, copy, and install), as there is no internet access and we don’t have any patching tool.
For Windows Server 2016 Standard I will install the latest Cumulative Update and the latest Servicing Stack Update. I think that is enough.
But what about Windows Server 2012 R2 standard? Which security patches should I install to have this server up-to-date?
Thank you in advance for help.
Copilot for 3rd party system – Advice needed
I am currently working on creating a Copilot intended to be used as a tool for employees to access and retrieve information about customers and the insurances they have in a 3rd party, non-Microsoft, system.
I’m struggling with finding information about some functionalities and best practices and would greatly appreciate your advice:
The insurances, customers, and claims are queryable via an API, with events published on a service bus upon changes – we do not have access to the database.
The insurances need to be correlated with the corresponding terms & conditions, which are available as PDFs in a blob store or SharePoint.
Depending on whether the user is a customer or an internal administrator, only the relevant insurance/claims data should be part of the dataset included in the response.
If an insurance is created for a customer, it should be part of the dataset in near real time.
A quick response time is crucial, which means pre-indexing data is a necessity.
Ideally, the Copilot should operate swiftly and accurately, but I am also tasked with creating a solution that is easy to set up and maintain. We’re deciding between using Copilot and AI Studio.
What would be the easiest way to implement this, and what would be the best way?
Thank you,
Malin
Unable to fetch more than 5000 records from filtered view
I have a SharePoint list and created filtered views of the list that contain more than 5,000 records. When I try to retrieve the records, I get an error like “The attempted operation is prohibited because it exceeds the list view threshold.” Can anyone help me get the data using pagination, batch by batch in a loop? If possible, please share a code snippet for Python.
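One common approach to the list view threshold error is to request items in small pages and follow the continuation link the server returns, rather than asking for everything at once. A minimal sketch along those lines is below; the endpoint shape follows the classic SharePoint REST API, but authentication (e.g. a bearer token or an Office365-REST-Python-Client session) is omitted, and the `session` parameter is any requests-style object you supply.

```python
# Hypothetical sketch: page through a large SharePoint list via the REST API,
# following the server-supplied continuation link page by page. Authentication
# is intentionally omitted; `session` is any object with a requests-style .get().
def iter_list_items(session, site_url, list_title, page_size=1000):
    """Yield list items page by page, keeping each request under the
    5,000-item list view threshold."""
    url = (f"{site_url}/_api/web/lists/getbytitle('{list_title}')/items"
           f"?$top={page_size}")
    while url:
        resp = session.get(url, headers={"Accept": "application/json;odata=nometadata"})
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        # SharePoint returns a continuation link when more pages remain.
        url = data.get("odata.nextLink") or data.get("@odata.nextLink")
```

Note that views filtered on non-indexed columns can still trip the threshold server-side; indexing the filter columns is usually part of the fix as well.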
How to change SP online site domain
Hi,
I want to change the SharePoint Online domain URL from https://contoso.sharepoint.com to https://contoso.com
Please advise whether this requires a higher license, and whether it is possible at all.
Thanks,
Deepak
Different meeting stage for host and guest
Environment: teams web and desktop (new version 1415/24031414721), asp core 6, third party cookies
host = user that starts teams app, logs in (authenticates) and invokes shareAppContentToStage
guest = all others users
1. manifest loads sidebar via manifest config (for example configurationURL: https://test.de/sidebar)
2. sidebar contains a Login button which authenticates against https://test.de/login
3. successful login (cookie authentication) in the iframe => sidebar is redirected to https://test.de/host
4. javascript of host calls “microsoftTeams.meeting.shareAppContentToStage(handleShareScreenAction, `https://test.de/guestorhost`);”
5. endpoint /guestorhost checks if the request is authenticated (in this example, only the user who logged in via the sidebar)
5.1 If request authenticated => redirect to ‘https://test.de/hoststage‘
5.2 If request NOT authenticated => redirect to ‘https://test.de/gueststage‘
Expected result: host should always get /hoststage, guest should always get /gueststage
Current result: sometimes it works properly, sometimes the host gets /gueststage or nothing
My guess is that third-party cookies are not working reliably – sometimes they are sent and sometimes not.
Microsoft Store latest changes with app downloads
Just wondering if any other IT admins are dealing with the recent changes to the Microsoft Store app downloads and how instead of launching the full MS Store, it now launches a containerized version. While this does lead to a better user experience, the unfortunate side effect is it bypasses our current restriction to block the MS Store from users.
We currently control all our MS Store apps in Intune via the Company Portal and have the policy to “block MS Store” enabled. However, the change where you can download apps via a self-contained EXE from the website now bypasses this block, presumably because the containerized version of the app installer does not reference any of these policies.
Can Microsoft please address this? We don’t really want to block apps.microsoft.com but if this behavior isn’t changed this might be the end result.
Why comments are not imported into Planner from Trello with apps4.Pro?
Hello, I am using the apps4.Pro tool to transfer content from my team’s Trello to a new Planner plan. Every time I do this with the administrator, the comments that were made on Trello are not imported into Planner, even though the tool’s documentation confirms that comments will also be transferred. We exported the Trello cards with all their content into a JSON file, which is normally supported by apps4.Pro. In the JSON file exported from Trello, the comments are correctly saved and accessible via a key named “text”. Is there a particular format for Planner to recognize and import comments?
Decommissioning a single, no-longer-used Exchange Server 2013
About 1.5 years ago we moved all email databases to Microsoft 365 and have used the cloud solution online since then. There was and is no hybrid connection between the on-premises Exchange server and Microsoft 365. Now I want to delete all mailboxes on the on-premises server and uninstall it.
I am looking at this tutorial in this relation: https://techcommunity.microsoft.com/t5/exchange-team-blog/decommissioning-exchange-server-2013/ba-p/3613793. With some differences (e.g. disabling and deleting mailboxes instead of migrating them) this seems to be a good way. But another tutorial suggests that a single server should be left for management (though this tutorial considers a hybrid installation with 2016/2019 EX-srv): https://www.alitajran.com/remove-last-exchange-server/
My question is: how do I remove (or at least minimize) a no-longer-used Exchange Server 2013 installation on Windows Server 2012 R2?
Thank you in advance.
MCP Certification Transcript not Found on my MCID
Hello,
I received my MCP in 2006, but I have not logged in to the platform for many years. I was able to recover the account using my MCID, but despite merging the account with Microsoft Learn several days ago, I could not find my MCP SQL transcript anywhere.
Is there a way to retrieve a copy of my transcript?
Thank you for your help
Power Query only returning 500,000 rows of data into excel
I have a Power Query that connects to an Azure Log Analytics workspace, pulling back data which I then use to populate an Excel spreadsheet and generate graphs and pivot tables.
I have just noticed that the number of records returned into Excel caps out at 500,000, and I know that there are more than 500,000 records.
Is there a limit? I can’t figure out if it’s my query or something else.
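For context, the Log Analytics query API does cap the result set of a single query (commonly cited as 500,000 rows / 64 MB), so one usual workaround is to split the overall time range into smaller windows and issue one query per window, then append the results. A small sketch of the window-splitting step (the query itself, whether via the azure-monitor-query SDK or a Power Query parameter, is left out as an assumption):

```python
# Illustrative workaround sketch: split a time range into fixed-size windows so
# each Log Analytics query stays under the per-query row cap. Run one query per
# window and concatenate the results afterwards.
from datetime import datetime, timedelta

def split_timespan(start, end, window):
    """Return (window_start, window_end) pairs covering [start, end)."""
    spans = []
    cursor = start
    while cursor < end:
        upper = min(cursor + window, end)
        spans.append((cursor, upper))
        cursor = upper
    return spans
```

In Power Query, the same idea can be expressed by parameterizing the KQL query with a start/end datetime and combining the per-window tables.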
Improving RAG performance with Azure AI Search and Azure AI prompt flow in Azure AI Studio
Content authored by: Arpita Parmar
Introduction
If you’ve been delving into the potential of large language models (LLMs) for search and retrieval tasks, you’ve probably encountered Retrieval Augmented Generation (RAG) as a valuable technique. RAG enriches LLM-generated responses by integrating relevant contextual information, particularly when connected to private data sources. This integration empowers the model to deliver more accurate and contextually rich responses.
Challenges with RAG evaluation
Evaluating RAG poses several challenges, requiring a multifaceted approach. Evaluating both response quality and retrieval effectiveness is essential to ensuring optimal performance.
Traditional evaluation metrics for RAG applications, while useful, have certain limitations that can impact their effectiveness in accurately assessing RAG performance. Some of these limitations include:
Inability to fully capture user intent: Traditional evaluation metrics often focus on lexical and semantic aspects but may not fully capture the underlying user intent behind a query. This can result in a disconnect between the metrics used to evaluate RAG performance and the actual user experience.
Reliance on ground truth: Many traditional evaluation metrics rely on the availability of a pre-defined ground truth to compare system-generated responses against. However, establishing ground truth can be challenging, particularly for complex queries or those with multiple valid answers. This can limit the applicability of these metrics in certain scenarios.
Limited applicability across different query types: Traditional evaluation metrics may not be equally effective across different query types, such as fact-seeking, concept-seeking, or keyword queries. This can result in an incomplete or skewed assessment of RAG performance, particularly when dealing with diverse query types.
Overall, while traditional evaluation metrics offer valuable insights into RAG performance, they are not without their limitations. Incorporating user feedback into the evaluation process adds another layer of insight, bridging the gap between quantitative metrics and qualitative user experiences. Therefore, adopting a multifaceted approach that considers retrieval quality, relevance of response to retrieval, user intent, ground truth availability, query type diversity, and user feedback is essential for a comprehensive and accurate evaluation of RAG systems.
Improving RAG Application’s Retrieval with Azure AI Search
When evaluating RAG applications, it is crucial to accurately assess retrieval effectiveness and to tune retrieval relevance. Since the retrieved data is key to a successful implementation of the RAG pattern, integrating Azure AI Search as your retrieval system can significantly enhance the quality of your results. Although AI Search offers keyword (full-text), vector, and hybrid search capabilities, this post focuses on hybrid search. The hybrid search approach can be particularly beneficial in scenarios where retrieval performance is varied or insufficient. By integrating both keyword and vector-based search techniques, hybrid search can improve the accuracy and completeness of the retrieved documents, which in turn can positively impact the relevance of the generated responses.
The hybrid search process in Azure AI Search involves the following steps:
Keyword search: An initial keyword index search finds documents containing the query terms using the BM25 ranking algorithm.
Vector search: In parallel, vector search uses dense vector representations to map the query to semantically similar documents, leveraging embeddings in vector fields with the Hierarchical Navigable Small World (HNSW) or exhaustive k-nearest neighbors (KNN) algorithm.
Result merging: The results from both keyword and vector searches are merged using a Reciprocal Rank Fusion (RRF) algorithm.
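The merging step above can be sketched in a few lines. This is a generic Reciprocal Rank Fusion implementation, not Azure AI Search's exact code; the constant k=60 is the commonly cited smoothing value, and Azure's internal details may differ.

```python
# Minimal Reciprocal Rank Fusion (RRF) sketch: each ranked list contributes
# 1 / (k + rank) to a document's fused score, so documents ranked highly in
# either the keyword or the vector list rise to the top of the merged result.
def rrf_merge(rankings, k=60):
    """rankings: list of ranked document-id lists. Returns ids by fused score."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF uses only ranks, not raw scores, it sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales.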
Enhancing Retrieval: Quality of retrieval & relevance tuning
When tuning retrieval relevance and quality, there are several strategies to consider:
Document processing: Experiment with chunk size and overlap to preserve context or continuity between chunks.
Document understanding: Embeddings play a pivotal role in enabling pipelines to understand documents in relation to user queries. By transforming documents and queries into dense vector representations, embeddings facilitate the measurement of semantic similarity between them. Consider selecting an appropriate embedding model. For example, higher-dimensional embeddings can store more context information but may require more computational resources, while smaller-dimensional embeddings are more efficient but may sacrifice some context.
Vector search configuration: Adjust the efConstruction parameter for HNSW to change the internal composition of the proximity graph – that is, the way the search algorithm organizes information internally. Think of this configuration like building a map: adjusting the parameter helps the algorithm decide how many landmarks to use and how far apart they should be, which can affect how quickly and accurately it finds relevant information.
Query-time parameter: Increase the number of results (k) to feed more search results into the ranker. This parameter determines how many search results are returned for each query; increasing k means the system provides more potential matches, which can be useful when you’re trying to find the best answer among many possibilities.
Enhancing hybrid search with Semantic re-ranking: To further enhance the quality of search results, a semantic re-ranking step can be added. Also known as L2, this layer takes a subset of the top L1 results and computes higher-quality relevance scores to reorder the result set. The L2 ranker can significantly improve the ranking of results already found by the L1, critical for RAG applications to ensure the best results are in the top positions. In Azure Search, this is done using a semantic ranker developed in partnership with Bing, which leverages vast amounts of data and machine learning expertise. The re-ranking step helps optimize relevance by ensuring that the most related documents are presented at the top of the list.
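The document-processing suggestion above (experimenting with chunk size and overlap) can be sketched as follows. Real pipelines usually split on tokens or sentence boundaries; character counts are used here only to keep the example dependency-free.

```python
# Illustrative fixed-size chunking with overlap: consecutive chunks share
# `overlap` characters so context that straddles a boundary is not lost.
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into chunk_size-character chunks overlapping by overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Tuning chunk_size trades context per chunk against embedding precision, and overlap is the knob for preserving continuity between chunks, as described above.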
By unifying these retrieval techniques and configurations, hybrid search can handle queries more effectively compared to using just keywords or vectors alone. It excels at finding relevant documents even when users query with concepts, abbreviations or phraseology different from the documents.
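Azure AI Search merges the keyword and vector result lists using Reciprocal Rank Fusion (RRF); a minimal sketch of that fusion step (the document IDs and the conventional k=60 constant are illustrative):

```python
def rrf(rankings, k=60):
    # rankings: a list of ranked doc-id lists, e.g. one from the keyword
    # (BM25) retriever and one from the vector retriever. Each document
    # earns 1 / (k + rank) per list, so documents ranked well in several
    # lists accumulate the highest fused score.
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results = ["d3", "d1", "d2"]
vector_results = ["d1", "d4", "d3"]
print(rrf([keyword_results, vector_results]))  # ['d1', 'd3', 'd4', 'd2']
```

Documents that both retrievers agree on ("d1", "d3") float to the top, which is why hybrid search copes with queries that keywords or vectors alone would miss.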
A recent Microsoft study highlights that hybrid search with semantic re-ranking outperforms traditional vector search methods like dense and sparse passage retrieval across diverse question-answering tasks.
According to this study, key advantages with hybrid search with semantic re-ranking include:
Higher answer recall: Returning higher quality answers more often across varied question types.
Broader query coverage: Handling abbreviations and rare terms that vector search struggles with.
Increased precision: Merged results combining keyword statistics and semantic relevance signals.
Now that we’ve covered retrieval tuning, let’s turn our attention to evaluating generation and streamlining the RAG pipeline evaluation process. Azure AI prompt flow offers a comprehensive framework to streamline RAG evaluation.
Azure AI prompt flow
Prompt flow streamlines RAG evaluation with a multifaceted approach: efficiently comparing prompt variations, integrating user feedback, and supporting both traditional ground-truth metrics and AI-assisted metrics that don’t require ground truth data. It ensures tailored responses for diverse queries, simplifying retrieval and response evaluation while providing comprehensive insights for improved RAG performance.
Both Azure AI Search and Azure AI prompt flow are available in Azure AI Studio, a unified platform for responsibly developing and deploying generative AI applications. The one-stop-shop platform enables developers to explore the latest APIs and models, access comprehensive tooling to support the generative AI development lifecycle, design applications responsibly, and deploy models, flows, and apps at scale with continuous monitoring.
With Azure AI Search, developers can connect models to their protected data for advanced fine-tuning and contextually relevant retrieval augmented generation. With Azure AI prompt flow, developers can orchestrate AI workflows with prompt orchestration, interactive visual flows, and code-first experiences to build sophisticated and customized enterprise chat applications.
Here is a video of how to build and deploy an enterprise chat application with Azure AI Studio.
Evaluating RAG applications in prompt flow revolves around three key aspects:
Prompt variations: Prompt variation testing, informed by user feedback, ensures tailored responses for diverse queries, enhancing user intent understanding and addressing various query types effectively.
Retrieval evaluation: This involves assessing the accuracy and relevance of the retrieved documents.
Response evaluation: The focus is on measuring the appropriateness of the LLM-generated response when provided with the context.
Below is a table of the evaluation metrics for RAG applications in prompt flow.
Metric Type | AI Assisted / Ground Truth Based | Metric | Description
--- | --- | --- | ---
Generation | AI Assisted | Groundedness | Measures how well the model’s generated answers align with information from the source data (user-defined context).
Generation | AI Assisted | Relevance | Measures the extent to which the model’s generated responses are pertinent and directly related to the given questions.
Retrieval | AI Assisted | Retrieval Score | Measures the extent to which the model’s retrieved documents are pertinent and directly related to the given questions.
Generation | Ground Truth Based | Accuracy, Precision, Recall, F1 score | Compares the RAG system’s responses against a set of predefined, correct answers, measuring the ratio of shared words between the model generation and the ground truth answers.
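The ground-truth metrics above can be made concrete with a small token-overlap sketch; this is a simplified stand-in for the actual metric implementations, not prompt flow's own code:

```python
def overlap_scores(generated, ground_truth):
    # Token-level comparison of a generated answer against a reference:
    # precision, recall and F1 over the shared words.
    gen = generated.lower().split()
    ref = ground_truth.lower().split()
    shared = sum(min(gen.count(w), ref.count(w)) for w in set(gen))
    precision = shared / len(gen) if gen else 0.0
    recall = shared / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = overlap_scores("the cat sat on the mat", "the cat lay on the mat")
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.83 0.83 0.83
```

Five of the six tokens are shared, so precision, recall and F1 all land at 5/6, which is exactly the "ratio of shared words" idea in the table.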
There are 3 AI-assisted metrics available in prompt flow that do not require ground truth. Traditional ground-truth-based metrics are useful while testing RAG applications in development, but AI-assisted metrics offer enhanced capabilities for evaluating responses in situations where ground truth data is unavailable. These metrics provide valuable insights into the performance of the RAG application in real-world scenarios, enabling a more comprehensive assessment of user interactions and system behavior. The metrics are:
Groundedness: Groundedness ensures that the responses from the LLM align with the context provided and are verifiable against the available sources. It confirms factual accuracy and ensures that the conversation remains grounded when all responses meet this criterion.
Relevance: Relevance measures the appropriateness of the generated answers to the user’s query based on the retrieved documents. It assesses whether the response provides sufficient information to address the question and adjusts the score accordingly if the answer lacks relevance or contains unnecessary details.
Retrieval Score: The retrieval score reflects the quality and relevance of the retrieved documents to the user’s query. It breaks down the user query into intents, assesses the presence of relevant information in the retrieved documents, and calculates the fraction of intents with affirmative responses to determine relevance.
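As a rough illustration of that intent-fraction idea, here is a hypothetical simplification in which substring matching stands in for the LLM judgment the real metric uses:

```python
def retrieval_score(intents, retrieved_docs, scale=5):
    # Split-query intents are checked against the retrieved documents;
    # the covered fraction is mapped onto a 1..scale style score.
    # (Illustrative only: the real metric asks an LLM whether each
    # intent is answered, rather than doing string matching.)
    covered = sum(
        any(intent.lower() in doc.lower() for doc in retrieved_docs)
        for intent in intents
    )
    return 1 + (scale - 1) * covered / len(intents)

intents = ["reset password", "delete account"]
docs = ["How to reset password in the portal", "Billing FAQ"]
print(retrieval_score(intents, docs))  # 3.0 — one of two intents covered
```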
Groundedness, relevance, and the retrieval score, along with prompt variant testing in prompt flow, collectively provide insights into the performance of RAG applications. They enable refinement of RAG applications, addressing challenges such as information overload, incorrect responses, and insufficient retrieval, ensuring more accurate responses throughout the end-to-end evaluation process.
Potential scenarios to evaluate RAG workflows
Now, let’s explore 3 potential scenarios to evaluate RAG workflows and how prompt flow and Azure AI Search help in evaluating those scenarios.
Scenario 1: Successful Retrieval and Response
This scenario entails the seamless integration of relevant contextual information with accurate and appropriate responses generated by the RAG application: both retrieval and response are good.
In this scenario, all three metrics perform optimally. Groundedness ensures factual accuracy and verifiability, relevance ensures the appropriateness of the answer to the query, and the retrieval score reflects the quality and relevance of the retrieved documents.
Scenario 2: Inaccurate Response, Insufficient Retrieval
Here, despite the retrieval of relevant documents, the response from the LLM is inaccurate. Groundedness may suffer if the response lacks verifiability against the provided sources. Relevance may also be compromised if the response does not adequately address the user’s query. The retrieval score might indicate successful document retrieval but fails to capture the inadequacy of the response.
To address this challenge, Azure AI Search retrieval tuning can be leveraged to enhance the retrieval process, ensuring that the most relevant and accurate documents are retrieved. By fine-tuning the search parameters discussed above in section “Enhancing Retrieval: Quality of retrieval & relevance tuning,” Azure AI Search can significantly improve the retrieval score, thereby increasing the likelihood of obtaining relevant documents for the given query.
Additionally, you can refine the LLM’s prompt by incorporating a conditional statement within the prompt template, such as “if relevant content is unavailable and no conclusive solution is found, respond with ‘unknown’.” Leveraging prompt flow, which allows for the evaluation and comparison of different prompt variations, you can assess the merit of various prompts and select the most effective one for handling such situations. This approach ensures accuracy and honesty in the model’s responses, acknowledging its limitations and avoiding the dissemination of inaccurate information.
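For example, a prompt variant with such a conditional fallback clause might look like the following sketch; the variant name, wording and placeholders are illustrative, not prompt flow built-ins:

```python
# Hypothetical prompt variant containing the fallback clause described above.
PROMPT_VARIANT_1 = (
    "Answer the user's question using ONLY the context below.\n"
    "If relevant content is unavailable and no conclusive solution is found, "
    "respond with 'unknown'.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

prompt = PROMPT_VARIANT_1.format(
    context="Refunds are processed within 5 business days.",
    question="How long do refunds take?",
)
print("respond with 'unknown'" in prompt)  # True
```

In prompt flow, this template would be one variant compared against others on the same test data to see which handles unanswerable queries most honestly.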
Scenario 3: Incorrect Response, Varied Retrieval Performance
In this scenario, the retrieval of relevant documents is followed by an inaccurate response from the LLM. Groundedness may be maintained if the responses remain verifiable against the provided sources. However, relevance is compromised as the response fails to address the user’s query accurately. The retrieval score might indicate successful document retrieval, but the flawed response highlights the limitations of the LLM.
Evaluation in this scenario involves several key steps facilitated by Azure AI prompt flow and Azure AI Search:
Acquiring Relevant Context: Embedding a user query to search a vector database for pertinent chunks is crucial. The success of retrieval relies on the semantic similarity of these chunks to the query and their ability to provide relevant information for generating accurate responses (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
Optimizing Parameters: Adjusting parameters such as retrieval type (hybrid, vector, keyword), chunk size, and K value is necessary to enhance RAG application performance. (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
Prompt Variants: Utilizing prompt flow, developers can test and compare various prompt variations to optimize response quality. By iterating over prompt templates and LLM selections, prompt flow enables rapid experimentation and refinement of prompts, ensuring that the retrieved content is effectively utilized to produce accurate responses. (see section “How to evaluate RAG with Azure AI prompt flow”).
Refining Response Generation Strategies: Moreover, exploring different text extraction techniques and embedding models alongside experimenting with chunking strategies can further improve overall RAG performance. (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
How to evaluate RAG with Azure AI prompt flow
In this section, let’s walk through the step-by-step process of testing RAG using prompt variants with the prompt flow using metrics such as groundedness, relevance, and retrieval score.
Prerequisite: Build RAG using Azure Machine Learning prompt flow.
1. Prepare Test Data: Ideally, you should prepare a test dataset of 50-100 samples, but for this article we will prepare a test dataset with just a few samples. Save it as a CSV file.
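A few test rows could be generated like this; the column names are assumptions and should match whatever schema your evaluation flow expects (for example, question / answer / context / ground_truth):

```python
import csv

# Two illustrative evaluation samples; real datasets should have 50-100.
rows = [
    {"question": "What is the refund window?",
     "answer": "Refunds are issued within 30 days.",
     "context": "Our policy allows refunds within 30 days of purchase.",
     "ground_truth": "30 days"},
    {"question": "Do you ship internationally?",
     "answer": "Yes, to over 40 countries.",
     "context": "We ship to more than 40 countries worldwide.",
     "ground_truth": "Yes"},
]

with open("rag_test_data.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```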
2. Add test data to Azure AI Studio: In your AI Studio project, under Components, select Data -> New data.
3. Select Upload files/folders and upload the test data from a local drive. Click on Next, provide a name to your data location and click on Create.
4. Once the test data is uploaded you can see its details.
5. Evaluate the flow: Under Tools -> Evaluation, click on New evaluation. Choose Conversation with context and select a flow you want to evaluate. Here we are testing two variants of prompt: Variant_0 and Variant_1. Click on Next.
6. Configure the test data. Click on Next.
7. Under Select Metrics, RAG metrics are automatically selected based on the scenario you have chosen. Refer to more details of metrics. Choose your Azure OpenAI Service instance and model and click on Next.
8. Review and finish. Click on Submit.
9. Once the evaluation is complete it will be displayed under Evaluations.
10. Check the results by clicking on the evaluation. You can compare the two variants of prompts by comparing their metrics to see which prompt variant is performing better.
11. You can check the result of individual prompt variant evaluation metrics under the Output tab -> Metrics dashboard.
12. Under the Output tab, you can also see a detailed view of the metrics under Detailed metrics result.
13. Under the Trace tab, you can trace how many tokens were generated and the duration for each test question.
Conclusion:
The integration of Azure AI Search into the RAG pipeline can significantly improve retrieval scores. This enhancement ensures that retrieved documents are more aligned with user queries, thus enriching the foundation upon which responses are generated. Furthermore, by integrating Azure AI Search and Azure AI prompt flow in Azure AI Studio, developers can test and optimize response generation to improve groundedness and relevance. This approach not only elevates RAG application performance but also fosters more accurate, contextually relevant, and user-centric responses.
Microsoft Tech Community – Latest Blogs –Read More
May 8, 2024: Copilot Business Case Builder – how to calculate the benefits
Copilot Business Case Builder – how to calculate business benefits for the customer’s executive team, webinar on May 8, 2024
Webinar on May 8, 2024, 9:00-10:00.
Register via this link.
Do your customers need justification beyond time savings? Does it feel like your customers are hesitating on investment decisions?
A unique opportunity for everyone to come and hear the best tips for selling the Copilot for Microsoft 365 solution. In this webinar we hand you the keys to these sales challenges, as Microsoft’s Business Case Builder guru Benny van Well presents a new way to calculate Copilot’s value and benefits for the customer.
Customers (and you) need more than time savings. In the webinar, Benny explains why and how business benefits are worked out, so that the customer’s executive team can make the investment decision. Copilot discussions should always be taken to the executive team, not to the customer’s IT function.
By the end of this webinar, all participants will have the readiness and understanding needed to justify the Copilot investment decision to the customer.
Before the webinar, familiarize yourself with this material: Microsoft Business Case Builder
Note that this webinar will exceptionally be held in English!
The recording can be watched afterwards via the same registration link in Cloud Champion!
Office Activations per user with devices specified
Hi,
Is it possible to get a report over all users who have activated Office including the name of the device (Windows, Apple, Android device name)?
I know that I can go to each user, one by one, to get that information, but having a report would be useful when searching for a specific computer name.
Thank you for your reply 🙂
Regards,
José
Multiple conditions case
Working on an Excel workbook where I want to see a value (Column 1) in Column 12 if the value in Column 3 = C1, the value in Column 2 is >= C31, and the value in Column 2 is <= D3. Only when all three conditions are true do I want to see the value in Column 1; otherwise I want to see “NO” in Column 1.
I tried the formula =IFS([@Column3]='Shift Pattern'!C1,[@Column1],[@Column2]>'Shift Pattern'!$C$3,[@Column1],[@Column2]<'Shift Pattern'!$D$3,[@Column1])
The above formula shows the Column 1 value even if only one of the above conditions is true, and ignores the others.
Can you please tell me how I can apply the above-mentioned conditions?
Either I am applying the formula wrong or I am applying the wrong formula. Which is the case?
add additional horizontal line on graph
Hello,
I want to add an additional horizontal line to this graph; it would be linear, at y = 24.8. It seems a simple problem but I can’t figure out how to do it. I have attached my graph and table data.
Thanks
Azure AD Assessment Tool from Microsoft not working anymore because of “disabled” enterprise app
Hi everyone,
I was using https://github.com/AzureAD/AzureADAssessment for some time to easily get a good list of all high-privileged users and enterprise apps.
But it does not work anymore because MS disabled their own enterprise app due to service violations.
Creating an own app seems to be easy with the help of a user here:
This application has been disabled by Microsoft · Issue #89 · AzureAD/AzureADAssessment (github.com)
But I end up with:
Original exception: AADSTS7000218: The request body must contain
the following parameter: 'client_assertion' or 'client_secret'.
I already selected "Allow public client flows" and added the Redirect URI "https://login.microsoftonline.com/common/oauth2/nativeclient"
Can anyone help me out, or do I need another tool?
BR
Stephan
Monitor SharePoint access
Is there a way to monitor or get alerts when a SharePoint site changes its permissions? For example, if someone new gets added to a SharePoint group or the permissions for the site changes. I’ve tried using Microsoft Purview alerts, but after setting up a few alerts several days ago, it doesn’t seem to be working. I’m not sure if these alerts just aren’t working or I set it up wrong? Is there some other tool I can look into? The only other thing I can think of is a flow to run a report or maybe a Power BI report showing the users and groups.
Deploy a Gradio Web App on Azure with Azure App Service: a Step-by-Step Guide
Context
Gradio is an open-source Python package that you can use for free to create a demo or web app for your machine learning model, API, Azure AI Services integration, or any Python function. You can run Gradio in Python notebooks or from a script. A Gradio interface can automatically create a public link, so you can easily share your demo or web app using Gradio’s sharing features. A share link usually looks like this: https://07ff8706ab.gradio.live . This link uses the Gradio Share Servers, but these servers only forward your local server and do not keep any data sent through your app. Share links are valid for 72 hours. For a more stable way to host a demo app, we suggest using Azure App Service. App Service is a Platform as a Service (PaaS) offering from Microsoft. It allows us to host web applications, REST APIs, and backend services for mobile applications, built with multiple programming languages and frameworks including .NET, Java, and Python. This document gives you a detailed guide on how to get your Gradio application working on Azure. Up we go!
Run your project locally
Any IDE will work, but we recommend using VS Code, because it has many features that make it easy to create a virtual environment, deploy your project to Azure, and run a local server. Download the Visual Studio Code installer for Windows. When the download is done, run the installer (VSCodeUserSetup-{version}.exe). It will take a minute or less. VS Code will be installed in C:\Users\{Username}\AppData\Local\Programs\Microsoft VS Code by default. During the installation, don’t forget to select the “Add Open with Code” action.
As an example, we will use this basic Gradio app that shows a hello message to the user.
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("Hi friends!")

demo.launch(share=True)
You can run this code in a Python terminal or as a Jupyter notebook cell.
However, to get the Gradio app to work on App Service, we need to mount the gradio.Blocks onto a FastAPI application.
Begin by creating a virtual environment. To achieve this, you need to install the library first.
pip install virtualenv
To create a venv for your project, follow these steps in your terminal: make a new project folder, use cd to go into it, create the environment, and activate it:
cd my-project
python -m venv myenv
myenv\Scripts\activate
Alternatively, you can create a venv in VS Code using the command palette: Ctrl + Shift + P -> Python: Create Environment
Now install the libraries
pip install gradio
pip install fastapi
Now rewrite your initial Gradio code: create main.py and add the following code:
from fastapi import FastAPI
import gradio as gr

app = FastAPI()

with gr.Blocks() as demo:
    gr.Markdown("Hi friends!")

app = gr.mount_gradio_app(app, demo, path="/")
You are now ready to run your FastAPI application with Uvicorn:
uvicorn main:app --reload
Please note that when you need to use secrets in your code, you should use environment variables.
import os
import gradio as gr

with gr.Blocks() as demo:
    my_secret_key = os.environ["MY_SECRET_KEY"]
    gr.Markdown("Hi friends!")

demo.launch(share=True)
One way to set the environment variable is through the terminal or PC settings, but a better way is to set up a debug profile in VS Code to make development easier. In your .vscode folder, put a launch.json file with this content:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: FastAPI",
            "type": "debugpy",
            "request": "launch",
            "module": "uvicorn",
            "args": [
                "main:app",
                "--reload"
            ],
            "jinja": true,
            "env": {
                "MY_SECRET_KEY": "<my secret key value>"
            }
        }
    ]
}
This will enable you to launch your local app using Run > Start Debugging.
Deploy to Azure App Service
Because Azure App Service runs your app in a Linux environment, you need to install the gunicorn package: the startup command relies on it (with Uvicorn workers) instead of plain uvicorn.
pip install gunicorn
Use the following command to make a requirements file:
pip freeze > requirements.txt
This will create a file that displays all the packages and their dependencies with their versions, something like this:
aiofiles==23.2.1
altair==5.3.0
annotated-types==0.6.0
anyio==4.3.0
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
contourpy==1.2.1
cycler==0.12.1
fastapi==0.110.1
ffmpy==0.3.2
filelock==3.13.4
fonttools==4.51.0
fsspec==2024.3.1
gradio==4.26.0
gradio_client==0.15.1
gunicorn==21.2.0
Make a new folder called deploy and open it in VSCode. Paste the main.py and requirements.txt files in this folder.
Some tutorials suggest creating a Docker image that can then run on App Service, but this is not required. You can also deploy code directly from a local workspace to App Service without making a Docker image.
Before you start, make sure you have the Azure Tools extension pack installed and that you are logged into Azure from VS Code. Then go to the Azure portal to create the resource: sign in, type app services in the search bar at the top of the portal, and choose App Services under the Services heading in the menu that appears below the search bar.
On the App Services page, select + Create, then select + Web App from the drop-down menu.
On the Create Web App page, fill out the form as follows.
Resource Group → Select Create new and use your RG name.
Name → your-app-name. This name must be unique across Azure.
Runtime stack → Python 3.11.
Region → Any Azure region near you.
App Service Plan → Under Pricing plan, select Explore pricing plans to select a different App Service plan.
The App Service plan determines the amount of resources (CPU/memory) that your app can use and how much you pay for them. For this example, under Dev/Test, choose the Basic B1 plan. The Basic B1 plan costs a small amount from your Azure account but offers better performance than the Free F1 plan. When done, select Select to confirm your changes.
At the bottom of the screen on the main Create Web App page, choose the Review + create option. This will bring you to the Review page. To create your App Service, select Create.
Now, in VS Code, sign in to Azure using the command palette (Ctrl + Shift + P)
Then open the Azure extension in VSCode:
Now go to your Web App resource that you made earlier > Right Click > Deploy to Web App
This will start the deployment job
After the deployment is finished, go to the Azure portal, find your Web App, and under Settings enter the environment variables
And then type the secret name and value as they appear in your local settings in VSCode.
To finish, go to Settings > Configuration > Startup Command and type in this command
python -m gunicorn main:app -k uvicorn.workers.UvicornWorker
To make the web app work properly and recognize the secrets, you have to restart it after setting the environment variables.
To see if the app service is functioning, go to Overview > Default Domain, and you can use this link to access your Web App.
There you have it, your Azure Web App is ready to go. I hope this article was useful.
Pivot table grouping and return OK if condition using countif()>0
Hi community, I would need your help here please.
I have a big table with multiple columns; however, I am focused on 2 main columns, called “Control” & “Status”.
In total I have 100 rows in my table; however, the same “Control” can be repeated in multiple rows. I have different types of “Status”, which can be: Not Started; In Progress; Completed.
I need to group each identical “Control” and return “OK” if that Control has no associated “Not Started” or “In Progress” status, meaning only “Completed” statuses apply. Otherwise, it should return “NOK”.
I have built a pivot table to show this in a summary visualization:
My first try was:
1. Build a specific pivot table: in Rows I insert “Status”; in Values I insert “Status” but change the Value Field Settings to Count.
2. Now insert a calculated field with the following formula:
=if((countif(status="Not Started")+countif(status="In Progress"))>0;"NOK";"OK")
When I click OK, it returns “Too few”.
I can’t find a way to do it…
Can anyone help me out to solve this need? How would you do it? Is this the best way?
I would need a smart and fast idea.
Thank you a lot!
BR, Charlie1992