Month: August 2024
ERROR 18456
EVENTID: 18456 - Login failed for user ‘sa’. Reason: Password did not match that for the login provided.
This message is being sent hundreds of times daily to my server, tracing back to my computer's IP address. I'm not sure why it's trying to log in as "sa", because I'm using Windows authentication to access the server under a different user. Everything seems to work fine, except that very recently my account appears to be creating deadlocks sometimes, even when not actively running any queries.
Any help would be great.
Linked Service to specific ADLS Gen2 container/filesystem
Is it possible to create a Linked Service in Synapse to a specific container/filesystem in an ADLSg2 account?
The ServicePrincipal I’m connecting with is only authorised on a specific container in the storage account, so I’m unable to link to the storage account itself.
If I ignore the Test Connection warning, I can create the LS, and then create a dataset (DS) pointing to a file in the container I have permission to access – that works. However, many users give up when they're unable to test the LS itself, and it would be very nice to be able to browse the container in the Data pane in Synapse, just like you can browse a whole linked ADLSg2 account.
I was hoping there would be an advanced JSON parameter to define the container/filesystem at the LS level, but I've been unable to find one in the documentation.
Cannot add custom filters for RefinableString200+
We’re using RefinableStrings 200-219 per our client’s request. We need to create custom filters in M365 Search verticals using these refinables. However, the M365 Search Admin Center only lists RefinableStrings up to 199. I checked for an API or PS script but couldn’t find anything available.
Teams Town Hall – Missing Q&A
We are testing the Town Hall feature of Teams as a replacement for Live Events. We are GCC licensed and cannot find the Q&A option, either before the meeting starts or after it has started. The option is just not there. We have Q&A turned on in the meetings policy in Teams, and it works when we are in Live Events. I found a post that said Viva Engage is needed, and then I found another post that said GCC doesn't have Q&A in Town Hall. Any ideas?
In addition, when we end a Town Hall by going to Leave/End Event, the attendee view shows a blank screen with text that reads something like “The host has turned off their camera”. The event doesn’t end until the organizer quits Teams. Any idea on this one?
Best way to remove access to some SharePoint Online Site/Libraries from M365 Admin/Engineer
My organization of ~400 users uses M365 with SharePoint Online. We are hiring a new M365 Engineer/Admin who needs a lot of SPO (and other) Admin access to do all the things he’ll need to do.
My Dir of IT would like us to prevent this new engineer/admin from being able to access a couple of SPO sites, like Finance and Exec Team, and to prevent access to a couple of document libraries in our HR site.
What is the best way (or best practices) to set this up in Entra ID and/or M365? Oh, we are fully cloud-based, so no local Domain Controllers and servers.
Thanks!
Brian
Earn a free Microsoft Power Platform Community Conf Pass!
We are excited to announce the upcoming Power Platform Conference taking place September 18-20th in Las Vegas, where industry experts, thought leaders, and enthusiasts will come together to explore the latest trends, best practices, and success stories related to the Power Platform.
As a valued partner, we invite you to participate in this event as a Demand Generation Agent. By signing up, you’ll have the opportunity to engage with your customers and drive demand for the conference.
Here’s how it works:
Sign Up: Visit the following link to register as a Demand Gen Agent: Partner Demand Gen Registration Form.
Share the Unique Code: Upon completing the registration form, we will send you a unique registration code that includes a $100 discount. Share this code with your customers and encourage them to register for the conference using it. Note: the term for this promotion is June 1 – Aug 31; only customers who register with the unique code within this timeframe will be eligible. See attached terms.
Generate Demand: As your customers register using the unique code, you’ll contribute to the overall success of the conference. If 10 or more customer individuals register using your code, we’ll reward you with one (1) free conference pass!
Don’t miss this opportunity to connect with the Power Platform community, learn from experts, and expand your network. We look forward to having you on board!
Thank you for your continued partnership.
I can’t see my booking pages.
I hope you are very well and you can help me.
Out of nowhere, I can no longer see the booking pages I had created. It's not an access issue, since I created them myself. I don't know why I can't see them anymore.
Using Excel Copilot to split columns
Hi Everyone, this is the first in a series of posts to show you some of the things that are possible to do with your workbooks using Copilot. Today I will start with this list of employees:
Table with these columns: Name, Address, City, State. First two rows of data: Claude Paulet, 123 Main Avenue, Bellevue, Washington; Jatindra Sanyal, 1122 First Place Ln N, Corona, California.
I would like to have the names in this list separated into 2 columns for the first and last names. To accomplish this, I'll start by clicking the Copilot button on the right side of the Home tab to show the Copilot pane, and then type the prompt:
Split the name column into first and last name
Excel Copilot looks at the content in the list and then suggests inserting 2 new calculated column formulas to split the first and last names from the Name column.
Picture showing the Excel Copilot pane containing this text: Looking at A1:D17, here are 2 formula columns to review and insert in Columns E and F: 1. First name - Extracts the first name of each individual by taking the text before the first space in the "Name" column and removing any extra spaces. =TRIM(LEFT([@Name],FIND(" ",[@Name])-1)) Show explanation 2. Last name - Extracts the last name of each individual by finding the space in their full name and taking the text that follows it. =TRIM(MID([@Name],FIND(" ",[@Name])+1,LEN([@Name])))
Hovering the mouse cursor over the “Insert columns” button in the copilot pane shows a preview of what inserting the new column formulas will look like. From the preview, it looks like it is doing what I wanted.
Picture of the list of employees with a preview of 2 new columns that would be added. First name column is being shown in column E and Last Name in column F.
Clicking on the Insert Columns button will accept the proposed change, inserting 2 new columns with calculated column formulas that split out the first and last names, giving me the result I was looking for!
Picture showing the Excel workbook with copilot pane open. Includes the employee table with 2 new columns added.
Over the coming weeks I will be sharing more examples of what you can do with Copilot in Excel.
Thanks for reading,
Eric Patterson – Product Manager, Microsoft Excel
*Disclaimer: If you try these types of prompts and they do not work as expected, it is most likely due to our gradual feature rollout process. Please try again in a few weeks.
SAP & Teams Integration with Copilot Studio and Generative AI
Introduction
In this blog, we provide a detailed guide on leveraging AI to optimize SAP workflows within Microsoft Teams. This solution is particularly advantageous for mobile users or those with limited SAP experience, enabling them to efficiently manage even complex, repetitive SAP tasks.
By integrating SAP with Microsoft Teams using Copilot Studio and Generative AI, we can significantly enhance productivity and streamline workflows. This blog will take you through the entire process of setting up a Copilot to interact with SAP data in Teams. We'll utilize the Power Platform and SAP OData Connector to achieve this integration. By following along, you will create and configure a Copilot, test and deploy it within Teams, enable Generative AI, build automation flows, create adaptive cards for dynamic data representation, and even change data in SAP.
1. Overview of the Solution
The solution consists of three main components: Copilot Studio, Power Automate Flow and SAP OData Connector.
Copilot Studio is a web-based tool that allows you to create and manage conversational AI agents, called Copilots, that can interact with users through Microsoft Teams.
Power Automate Flow is a tool that allows you to automate workflows between your applications.
SAP OData Connector is a custom connector that enables you to connect to SAP systems using the OData protocol.
The following diagram illustrates how these components work together to provide a seamless SAP and Teams integration experience.
2. Prerequisites
Before you start building your Copilot, you need to make sure that you have access to the Power Platform and to an SAP system. You can leverage the licenses and SAP systems that are available in your company, or alternatively you can use a trial license for the Power Platform and a public SAP demo system. The following links will guide you on how to obtain these resources if you don’t have them already.
2.1. Power Platform Access
Trial license: https://learn.microsoft.com/en-us/power-apps/maker/signup-for-powerapps
2.2. SAP System Access
Request SAP Gateway Demo System ES5 Login: https://developers.sap.com/tutorials/gateway-demo-signup.html
3. Create a Copilot
Now that you have an overview of the solution and have ensured you meet the prerequisites, it’s time to dive into the hands-on process of creating a Copilot. This section will guide you through the detailed steps to set up and configure your Copilot, enabling it to interact with SAP data within Microsoft Teams. You’ll learn how to leverage Power Automate Flow and the SAP OData Connector to build a robust automated workflow. By the end of this chapter, you will have a fully functional Copilot that can retrieve information about products from the SAP system.
3.1. Create a Copilot
Create a Copilot with the name “SAP Product Copilot”:
And enter the following details:
Activate “Generative AI” Feature in the Settings of the Copilot
3.2. Setup Flow + Connector
Create a new “Instant cloud flow” in Power Automate:
Provide the name and choose "Run a flow from Copilot" as the trigger:
Add an input variable to the trigger action:
Add an SAP OData action. Choose Query OData entities:
Configure the connection:
OData Base URI: https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC
Enter the OData Entity “Product Set”:
Press "Show all" to enter a filter in the advanced parameters:
In $Filter, you'll enter an expression that filters on the provided Category.
Add this expression:
concat('Category eq ', '''', triggerBody()['text'], '''')
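For illustration, if the flow input is Notebooks, the expression evaluates to the filter string below, which the connector sends as the $filter query option; the ProductSet entity name and full URL are an assumption based on the GWSAMPLE_BASIC demo service:
Category eq 'Notebooks'
GET https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC/ProductSet?$filter=Category eq 'Notebooks'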
Add a final action that will return the found products.
Find the required action by searching for Copilot
Add an output variable
In the output add a Power FX expression:
body('Query_OData_entities')
Finally, the action should look like this:
3.3. Test the flow
Choose Manually and enter the category “Keyboards”:
The successful run will indicate that the flow and the connector work fine.
3.4. Connect the Copilot with the Flow.
Add an action:
Choose the previously created flow:
Next:
Edit the Input:
And add the following text into the Description. This ensures Gen AI knows how to set the input for the flow:
Product Category. Only one single category can be chosen as input from this list. It is case-sensitive and must be written exactly like below:
Accessories,
Notebooks,
Laser Printers,
Mice,
Keyboards,
Mousepads,
Scanners,
Speakers,
Headsets,
Software,
PCs,
Smartphones,
Tablets,
Servers,
Projectors,
MP3 Players,
Camcorders
Edit the Description of the action output in the same way:
Products found in SAP of a given category.
Present the result as HTML table including following information: ProductID; Name; Category; Description; Supplier; Price; Currency.
3.5. Test the Copilot in the test pane
Open the Test pane and give it a try:
For the first test you need to connect with the SAP OData Connector:
Now you should get the response in the chat window:
3.6. Add the Copilot to Teams
Publish the Copilot first:
Then connect to Teams in the Channels Tab:
Open the Copilot in Teams
Finally test the Copilot in MS Teams
You have successfully built a Copilot that can retrieve up-to-date information of products stored in an SAP system and present it in a table format in Microsoft Teams.
4. Use Adaptive Cards to present SAP information
Now let’s move on by creating adaptive cards to display SAP data dynamically within Microsoft Teams. In this section you will
Create and configure topics.
Activate topics using the appropriate trigger phrases.
Call flows from within topics.
Utilize various entities of the SAP OData Connector.
Handle special situations, such as when no product is found.
Parse JSON data and assign values to topic variables.
Design and implement adaptive cards.
Understand the differences between actions and topics.
By mastering these skills, you will enhance the functionality and interactivity of your Copilot, providing users with a more intuitive and efficient way to interact with SAP data.
4.1. Create a Topic “SAP Product Data”
Create a new topic from blank
In the section "Describe what the topic does", enter the following:
You can copy/paste this description:
This tool can handle queries like these:
sap product update.
Update SAP product.
Update SAP product data.
Edit product information in SAP.
Edit SAP product data.
Show SAP product details.
Edit SAP product information.
Save the Topic
Open the Topic Details and add the Description
Description: “Show and update information about a product in the SAP system.”
Create the Input variable
Save again.
Add a node that asks for the Product ID.
Question:
Which product do you want to update?
Please provide the Product ID.
Example: HT-1000.
In the Identify field choose “User’s entire response”:
Make sure the response will be saved in the variable ProductID:
Note: With the Generative AI feature enabled, the question might not be asked when the ProductID is already known within the context of the conversation, which is very convenient.
Add a Message that will help us to verify if up to here the topic works as designed:
This message can be removed later when all is working fine.
4.2. Create a Flow to get SAP product details
Take the flow created earlier (in section 3.2) and make a copy.
Refresh the page to see the new flow and then turn the flow on
Change the Filter in the new Flow
The new filter must be changed to filter on ProductID:
concat('ProductID eq ', '''', triggerBody()['text'], '''')
Then update the action and save the flow.
4.3. Call the Flow from the Topic
Add a node “Call Action” and choose the “List SAP product details” flow:
As Input provide the ProductID variable:
Save again.
4.4. Add error handling when no data is found
Add some error handling in case we got the ProductID wrong, and nothing was found in SAP:
Steps are:
Set a condition where Output is equal to "[]", which means no product was found and an empty JSON array was returned.
Send a message to inform the user: “The product with ID “ProductID” was not found.”
Add a route via "Topic Management" -> "Go to step" and select the destination step where the topic asks for the Product ID.
Save again.
4.5. “Parse value” the Flow Output
The flow returns a JSON element that contains all product details. These must be parsed and assigned to a table variable.
Select the Output from the Flow as Input.
As data type pick “From sample data”
Get the sample data from a flow test run with a known Product ID
You can copy/paste the sample data from the successful flow run. Either take it from the output of the "Respond to Copilot" action or from the OData query output; an illustrative sample is shown after these steps.
Enter the sample data to create the schema
Save the result into a new variable called Product.
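For reference, here is a trimmed, illustrative sample of what the flow output for a single product might look like; the field names follow the GWSAMPLE_BASIC ProductSet and the values are placeholders, so use the data from your own test run:
[
  {
    "ProductID": "HT-1000",
    "Name": "Notebook Basic 15",
    "Category": "Notebooks",
    "Description": "Notebook Basic 15 with 1.7 GHz processor",
    "SupplierName": "SAP",
    "Price": "956.00",
    "CurrencyCode": "EUR"
  }
]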
4.6. Adaptive Card with SAP Data
Create a Send message node
Send the message as adaptive card
Enter the following draft adaptive card JSON to start with:
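A minimal sketch of such a draft card is shown below; the title text and schema version are illustrative, and the last TextBlock is the entry edited in the next step:
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.5",
  "body": [
    {
      "type": "TextBlock",
      "size": "Medium",
      "weight": "Bolder",
      "text": "SAP Product Details"
    },
    {
      "type": "TextBlock",
      "text": "${Topic.ProductID}"
    }
  ]
}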
The card will look like this:
As a next step, make the adaptive card dynamically show the values from the Product table. For this, you must switch to Formula:
This will change the format slightly, removing the double quotes from the variable names.
Edit the last Entry from
"text": "${Topic.ProductID}"
To:
text: Topic.ProductID
Save and test again. Now you have an adaptive card that dynamically shows a value returned by the flow:
As a next step add the full code from the link below to show all the SAP information in the adaptive card:
Note: You can create your own design and adaptive cards JSON code here: https://adaptivecards.io/designer/
Save and run another test:
5. Changing the data within SAP
To advance the Copilot's capabilities even further, you can enable changing product information in SAP. To do this, the following steps are required:
Initiate actions from adaptive cards by using the “Ask with adaptive card” node.
Create another flow to update data and use the update entity in the SAP OData connector.
Configure additional variable handling in the topic.
Update the Copilot to use the “update” flow.
5.1. Add “Ask with adaptive card” Node
Add a node “Ask with adaptive card”
See also:
Ask with Adaptive Cards – Microsoft Copilot Studio | Microsoft Learn
Take the code from the previously created adaptive card.
Add the action “Submit” at the end of the AC code.
,
actions: [
  {
    type: "Action.Submit",
    title: "Submit Changes",
    horizontalAlignment: "Center",
    data: {
      action: "submitProductChanges"
    }
  }
]
Save this adaptive card.
Delete the previously created AC as we don’t need it any longer.
If you get an error about missing properties, it means the Adaptive Card editor did not automatically create the output structure, and you need to edit the schema manually.
Edit schema on the bottom right.
Enter the variable types into the schema binding:
kind: Record
properties:
action: String
actionSubmitId: String
currencyCode: String
description: String
price: Number
productName: String
Confirm this, save the topic and test again
5.2. Create “Update SAP Product Details” Flow
The final step is to run another flow that will update the product information in SAP.
Create another flow with the same copy procedure as before.
Refresh the page and “Turn on” the flow.
At the second position, add an "SAP OData Connector" action with the operation "Update OData entity".
Set the ProductID Input as shown:
In the advanced parameters, mark the fields where we want to allow updates. These are Name, Description, Price, and CurrencyCode:
Add the additional required flow parameters (note: Price is a number):
Update the "Update OData entity" action with the relevant variables in the corresponding fields:
Update the last action in the flow with the success message:
5.3. Update the Copilot Topic to call the update flow
After the adaptive card node, add the node "Call an action" and pick the "Update SAP product details" flow:
Open the “variable” pane and activate the required variables.
Fill in the variables in the respective input fields.
The last step is to send the message about the successful product information update:
Save and test in the copilot test pane.
When all works fine you can publish again and test the functionality in MS Teams. You’ll need to trigger the Copilot update in MS Teams with the “Start over” trigger phrase.
We hope you enjoyed following along this blog and that you will find it useful for your own SAP projects.
6. Conclusion
With the steps outlined in this blog, you are now equipped to fully leverage the integration of SAP with Microsoft Teams using Copilot Studio and Generative AI. You can use any available SAP OData service or even create your own, enabling seamless access and management of SAP data within Teams. This integration not only simplifies workflows but also transforms simple, repetitive tasks into significant value-adding activities for users.
Even users with little or no SAP know-how can complete SAP-specific tasks, thanks to the built-in Generative AI feature that can guide them and answer questions along the way. As you explore and implement these capabilities, you'll discover new opportunities to enhance productivity and drive innovation in your SAP-related digital workspace.
The future of integrated, AI-powered collaboration is here, and it hopefully starts with your next SAP and Teams integration scenario.
Unlocking the Potential of Unstructured Data with Microsoft Copilot and Azure Native Qumulo
It has been a true pleasure to see our friends at Qumulo constantly innovating and delivering a service that adds more value with every release. In the post below, authored by Qumulo, you will learn more about the value of their NEW Copilot integration and how to implement it with Azure Native Qumulo. Enjoy!
Principal PM Manager, Azure Storage
Partner post from our friends at Qumulo
Learn more about Azure Native Qumulo here!
When working with unstructured data, employees may spend hours every week looking for information within internal data sources, performing analyses, writing reports, making presentations, finding and creating insights in dashboards, or customizing information for different clients and groups. Microsoft Copilot helps employees to be more creative, data-informed, efficient, ready, and productive when dealing with unstructured data.
Using Microsoft Copilot with the M365 suite of productivity products is seamless, but as organizations become more data-driven and lean into Copilot across their data estate, the need for Copilot integration with full-featured file systems is more apparent than ever. Copilot can unlock the value contained in petabyte-scale systems, delivering the latent intelligence already contained in an organization's dataset. The Azure Native Qumulo (ANQ) Copilot Connector enables an organization to take full advantage of the data typically stored in enterprise-scale file systems.
The Challenge: Deep Analysis of Unstructured Data
Using natural language to access the value of their file data is beneficial for customers in all industries. In fact, most data is stored in unstructured formats because files have been the standard structure for applications that do not rely solely on an underlying database. Microsoft Copilot can extract insights rapidly from both legacy documentation and modern application outputs by using a centralized unstructured data store like Azure Native Qumulo.
For example:
In the healthcare industry, medical professionals use imaging technology to aid the diagnostic process. In many cases, these images need to be retained for decades so that radiologists, cardiologists, etc., can perform patient studies across long time horizons. Hospitals with large networks must manage and search across petabytes of data generated from modern and legacy medical imaging applications.
The telecom industry leverages AI to boost network performance and find reliability faults within their infrastructure. This helps to automate operations and use data-driven insights for enhanced network coverage. Multiple petabytes of data are generated monthly and reside in unstructured data storage. Understanding this data requires highly trained technicians to summarize findings.
AI-based self-driving cars have sensors, automotive analytics, and connections to cloud services. These cars use data analytics to make real-time decisions based on the data they gather from in-car sensors. The storage capacity per vehicle could reach 11 terabytes by 2030, putting centralized data stores in the hundreds-of-exabytes range. Summarizing patterns requires complex modeling and data science, but the insights from individual files are also valuable.
Energy companies optimize oil & gas production by forecasting future events and improving flow methods using AI. Combining these complex models with natural language prompts enables greater access across the business unit, shortens the time to insight, and identifies patterns for demand forecasting, oil exploration, and predictive maintenance. These datasets often reach petabyte scale and include multiple different file types.
The above industries often struggle with analyzing billions of unstructured files due to limitations in existing tools:
Legacy Search Tools: Designed for structured data, these tools are inefficient for unstructured data and involve costly, time-consuming data migrations.
Open Source AI Solutions: These require sharing data with unregulated third-party companies, raising significant security concerns and potentially creating a risk of data leakage to competitors.
Manual/OCR Methods: Slow, inefficient, and often failing to provide new insights despite reformatting data.
Finding Insights with Microsoft Copilot
Microsoft Copilot combined with Azure Native Qumulo provides a seamless solution for reading and analyzing unstructured data in place, without requiring duplication or alteration of your data. Custom connectors allow Copilot to handle various file types, from PDFs and spreadsheets to text files. With this integration, Copilot helps integrate large data stores with the daily workflow of high value team members.
Flexibility and Scalability:
Customizable Connectors: Engineers can develop custom connectors for various file types, allowing Copilot to analyze virtually any data type stored in ANQ.
Petabyte-Scale Analysis: Microsoft Copilot is capable of handling extensive data sets, without the need for data movement or migration by using ANQ.
Security and Privacy:
Azure Tenant Integration: Copilot operates entirely within the secure confines of your Azure tenant, ensuring that data remains protected.
Exclusive Organizational Access: Analysis results are accessible only within your organization, maintaining strict data privacy and integrity.
User-Friendly Interface:
Seamless Integration: Works with the standard Copilot search interface, providing a familiar user experience within the applications already most used for daily productivity, e.g., Teams, Outlook, M365, etc.
Natural Language Processing (NLP): Users can submit and execute queries using plain language, making sophisticated data analysis accessible to non-technical staff.
Outcomes
In the industry scenarios we described earlier, each customer benefits from giving more decision makers access to their data without requiring data scientists or highly trained personnel to serve as middlemen. Both new and legacy data become useful for comparisons and pattern analysis without spending costly hours trying to find the right files for analysis. Lastly, their Copilot implementation is no longer limited to OneDrive or SharePoint, and each business can take advantage of the data stored across the entire network-attached storage estate.
The cost savings in terms of increased productivity are profound.
Getting Started
Qumulo has published their Azure Native Qumulo Copilot Connector on GitHub at: GitHub – Qumulo/QumuloCustomConnector
Launching Azure Native Qumulo in the Azure Portal takes 12-15 minutes, and the connector can be set up within an hour. You can begin securely interacting with data using natural language as soon as the connector is established.
Embrace the Future of Data Management
For technical teams ready to revolutionize their data management strategy, the integration of Microsoft Copilot with Azure Native Qumulo offers a cutting-edge solution. Unlock the full potential of your unstructured data and stay ahead in the data-driven world.
Explore the possibilities today with Microsoft Copilot and Azure Native Qumulo and transform your approach to data analysis and management.
How Glint enriches the Microsoft Viva experience
In July 2023, Glint moved from its home at LinkedIn to join the Microsoft Viva products, providing a comprehensive approach to improving employee engagement and performance for organizations. It's been a year of incredible accomplishments that have helped legacy Glint customers, and new Viva customers, achieve their business and people goals using a human-centric approach.
We are proud to have so many successes in our first year as Viva Glint:
Completing Microsoft compliance for our annual privacy review so our customers know that the security of their data is our number one priority.
Achieving among the highest levels of product accessibility and inclusivity in our field.
Previewing Copilot in Viva Glint, our AI tool to drive meaningful action on employee feedback by quickly summarizing large quantities of survey comments.
Adding survey items to our taxonomy to measure the impact of Copilot and AI transformation as well as employee productivity in addition to employee engagement.
What’s to come for Viva Glint?
As you can see on our roadmap, there is much more to come in the next 12 months. Integrations with Microsoft 365 and other Viva apps have begun to preview with rave reviews and thoughtful feedback. Look ahead to these notable features:
Integrate Viva Glint with Viva Insights: View employee engagement survey scores alongside aggregated patterns of how people work to reveal new insights into your teams’ strengths and opportunities and drive meaningful improvements to the employee experience.
360 Feedback: Give employees a deeper understanding of their strengths and areas for development, from multiple viewpoints, leading to personal and business performance enhancements.
Teams Integration: Enhance notifications and Nudge capabilities by enabling easy communications in the daily flow of work.
Dozens of platform upgrades providing timesaving, self-serve experiences for admins and managers: Onboarding and Exit survey templates, Raw Data Export features, retroactive updates, and enhanced control over data – to name just a few!
Copilot, Copilot, Copilot! Use Viva Glint to understand and quantify your AI journey. We will continue to innovate the Copilot in Viva Glint comment summarization capabilities for improved AI-powered employee feedback and action taking experiences.
So many resources to take advantage of
We have the assets you need to support your Viva Glint journey! Thanks to our customers who prompted the debut of this extensive catalog of peer and expert forums and the training and guidance resources available to ensure you get every benefit from your Viva Glint programming:
Join the Viva Glint Product Council: Be part of sessions that bring customers directly into our product-building process. Sign up once for your organization and bring all your people who can be part of an insightful discussion!
Attend our Ask the Experts series: Attend live discussions with Viva Glint experts to learn best practices, engage in peer-to-peer learning, and get your questions answered about the Glint product.
Complete learning paths and earn Viva Glint badges: Use the Microsoft Learn training site to gain deeper expertise for using Glint programs. When completing a learning path, apply for digital badges to showcase your learning achievements.
Use Microsoft Learn to guide your Viva Glint journey: Find technical documentation and guidance to help you through key stages of your Viva Glint journey.
Sign up for our communications to learn about product releases and how to participate in product previews, and to register for upcoming events from the Viva team of experts.
Join our live thought leadership events. Check out our AI Empowerment series as well as our Think Like a People Science series to learn what is new from the Viva People Science research team about employee engagement, productivity, AI transformation, and more.
Join us at Microsoft Viva
For customers joining us from LinkedIn, we encourage you to move up your migration date to take advantage of Copilot in Viva Glint and all the other new and upcoming features mentioned above. Talk to your CxPM or if you’re not supported by a CxPM, reach out to VivaGlintMigration@microsoft.com to get your migration process started. Review the steps you’ll take: Learn about the licensing steps here so you can get started on your journey.
Configuring auto_explain for Advanced Query Analysis
The auto_explain module in PostgreSQL is a powerful tool for diagnosing query performance issues by automatically logging execution plans of slow queries. Properly configuring auto_explain can significantly enhance your ability to troubleshoot and optimize complex queries and stored procedures. Here’s a detailed guide on configuring auto_explain effectively:
Note: Adding auto_explain to the shared_preload_libraries parameter requires a restart of the PostgreSQL server to take effect.
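As a minimal sketch, enabling the module typically looks like this in postgresql.conf (or via the equivalent server parameter on a managed service); the threshold shown is only an example:
# postgresql.conf
shared_preload_libraries = 'auto_explain'
# Optional: default logging threshold in milliseconds
auto_explain.log_min_duration = 1000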
Customize behavior with the following settings to tailor the level of detail captured in the logs:
Purpose: This parameter (auto_explain.log_min_duration) controls the minimum duration a query must take to be logged by auto_explain.
Usage:
Setting this to a specific value (in milliseconds) means that only queries exceeding this duration will be logged. For instance, setting it to 100 will log only queries that take longer than 100 milliseconds.
If you set it to 0, every query will be logged, regardless of how quickly it executes. This is useful in a development environment where you want to capture all activity, but in a production environment, you may want to set a higher threshold to focus on slower queries.
Notes: The parameter only considers the time spent executing the query (ExecutorRun). It does not include the time taken for query planning or compilation. This means if a query is slow due to planning or compilation issues, those won’t be reflected in the auto_explain logs.
Purpose: When auto_explain.log_analyze is set to true, the module runs the query as EXPLAIN ANALYZE, which includes actual run-time statistics.
Details:
It provides detailed timing information for each stage of the query, including parsing, planning, and execution.
This helps in identifying where time is being spent within the query execution, offering deeper insights compared to a standard EXPLAIN that only shows the query plan without timing.
Consideration:
Resource Intensity: Enabling log_analyze can be resource-intensive. Since EXPLAIN ANALYZE requires the database to time each operation, it adds overhead to the execution of each query.
All Queries Executed with EXPLAIN ANALYZE: When log_analyze is enabled, PostgreSQL runs all queries with EXPLAIN ANALYZE, regardless of whether they meet the log_min_duration threshold. This is because the system cannot predict upfront how long a query will take. Only after the query completes does PostgreSQL compare its actual duration to the log_min_duration threshold. If the duration exceeds this threshold, the query, along with its runtime statistics, is logged. If not, the statistics are discarded, but the overhead of collecting them has already been incurred.
Overhead of Timing Operations: Collecting runtime statistics involves overhead, particularly due to system clock readings, which can vary depending on the system’s clock source. This can add significant resource consumption, especially on high-traffic systems.
Purpose: When auto_explain.log_buffers is enabled, buffer usage is logged during query execution.
Details:
This setting helps you understand how much data is being read from or written to disk buffers, which is crucial for analyzing I/O performance.
For example, if a query shows high buffer usage, it may indicate that the query is performing a lot of disk I/O, which could be a performance bottleneck, especially if the data is not cached in memory.
Purpose: Similar to log_analyze, auto_explain.log_timing logs detailed timing information for each phase of the query.
Details:
It captures the time spent on different operations within the query execution, allowing you to pinpoint where delays are occurring.
This is particularly useful for complex queries where the execution time is not evenly distributed across different operations.
Considerations:
Performance Overhead: Detailed timing information introduces some performance overhead. The need to read the system clock frequently during query execution can affect overall query performance, particularly in high-throughput environments.
Increased Log Volume: Enabling this setting will generate more log data, which can impact log management and storage. This is especially relevant for production systems where log volume needs to be carefully managed.
Environment Suitability: It’s best to use this feature in development or testing environments rather than production, where the overhead and increased log volume might be less acceptable.
Purpose: The auto_explain.log_triggers option logs details about triggers fired during query execution.
Details:
Triggers can have a significant impact on performance, especially if they involve complex operations or if they are fired frequently.
Logging trigger activity can help you understand their role in query performance, making it easier to optimize both the query and the associated triggers.
Purpose: When auto_explain.log_verbose is set to true, it provides a more detailed output of the execution plan.
Details:
The verbose output includes additional information such as join types, exact row estimates, and other details that might be omitted in a standard EXPLAIN output.
This level of detail is particularly useful for diagnosing complex queries where understanding the exact nature of the execution plan is critical.
Purpose: The auto_explain.log_wal parameter logs information about Write-Ahead Logging (WAL) usage during query execution.
Details:
Captures details on WAL activity, which can be useful for understanding how much data is being written to the WAL and how it impacts performance.
Helps in diagnosing performance issues related to write operations and understanding the impact of transactions on WAL.
Purpose: auto_explain.log_settings includes, with each logged plan, any modified configuration options that affect query planning.
Details:
This parameter helps you track the configuration used for each query, making it easier to understand and reproduce the conditions under which specific performance characteristics were observed.
Useful for debugging issues and ensuring that the correct settings are applied consistently.
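For quick, ad-hoc experiments, you can also load the module for a single session instead of server-wide; this sketch assumes you have sufficient privileges (LOAD generally requires superuser-level rights, so it may not be available on every managed service):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;  -- log every statement in this session
SET auto_explain.log_analyze = on;
SET auto_explain.log_buffers = on;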
Example: Optimizing Query Performance Using auto_explain in PostgreSQL
Prerequisites
Log Analytics Workspace: Ensure you have a Log Analytics workspace created. If not, create one in the same region as your PostgreSQL server to minimize latency and costs.
Permissions: Ensure you have the necessary permissions to configure diagnostic settings for the PostgreSQL server and access the Log Analytics workspace.
To create a Log Analytics Workspace in Azure, you can refer to the official Microsoft documentation. Here’s a direct link to the guide that walks you through the process: Create a Log Analytics Workspace
Configure Diagnostic settings:
Navigate to Your PostgreSQL Server
In the left-hand menu, select All services.
Search for and select Azure Database for PostgreSQL servers.
Choose the PostgreSQL server you want to configure.
Configure Diagnostic Settings
In the PostgreSQL server blade, scroll down to the Monitoring section.
Click on Diagnostic settings.
Add a Diagnostic Setting
Click on + Add diagnostic setting.
Provide a name for your diagnostic setting.
Select Logs and Metrics
In the Diagnostic settings pane, you’ll see options to configure logs and metrics.
Logs: Check the logs you want to collect. Typical options include:
PostgreSQLLogs (server logs)
ErrorLogs (error logs)
Metrics: Check the metrics you want to collect if needed.
Send to Log Analytics
Under Destination details, select Send to Log Analytics.
Choose your Log Analytics workspace from the drop-down menu. Make sure it is in the same region as your PostgreSQL server.
If you don’t see your workspace, ensure it’s in the same region and refresh the list.
Review and Save
Review your settings.
Click Save to apply the diagnostic settings.
Download Server Log for offline analysis
-- Create the stores table
CREATE TABLE IF NOT EXISTS stores (
    id SMALLINT PRIMARY KEY,
    name VARCHAR(50) NOT NULL,
    location VARCHAR(100)
);

-- Create the sales_stats table
CREATE TABLE IF NOT EXISTS sales_stats (
    store_id INTEGER NOT NULL,
    store_name TEXT COLLATE pg_catalog."default" NOT NULL,
    sales_time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
    daily_sales DOUBLE PRECISION,
    customer_count INTEGER,
    average_transaction_value DOUBLE PRECISION,
    PRIMARY KEY (store_id, sales_time)
);

-- Insert sample data into stores table
INSERT INTO stores (id, name, location)
VALUES
    (1, 'Main Street Store', '123 Main St'),
    (2, 'Mall Outlet', '456 Mall Rd'),
    (3, 'Downtown Boutique', '789 Downtown Ave');

-- Create or replace the procedure to generate sales statistics
CREATE OR REPLACE PROCEDURE public.generate_sales_stats(
    IN start_date TIMESTAMP WITHOUT TIME ZONE,
    IN end_date TIMESTAMP WITHOUT TIME ZONE
)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO sales_stats (store_id, store_name, sales_time, daily_sales, customer_count, average_transaction_value)
    SELECT
        stores.id AS store_id,
        stores.name AS store_name,
        s1.time AS sales_time,
        random() * 10000 AS daily_sales,
        (random() * 200 + 50)::INTEGER AS customer_count,
        random() * 200 + 20 AS average_transaction_value
    FROM generate_series(
        start_date,
        end_date,
        INTERVAL '50 second'
    ) AS s1(time)
    CROSS JOIN (
        SELECT
            id,
            name
        FROM stores
    ) stores
    ORDER BY
        stores.id,
        s1.time;
END;
$$;
Configure the following server parameters
auto_explain.log_analyze = ON
auto_explain.log_buffers = ON
auto_explain.log_min_duration = 5000
auto_explain.log_verbose = ON
auto_explain.log_wal = ON
auto_explain.log_timing = ON
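On Azure Database for PostgreSQL, these values can be changed from the Server parameters blade in the portal or, as a sketch that assumes the Flexible Server deployment and placeholder resource names, with the Azure CLI:
az postgres flexible-server parameter set --resource-group myResourceGroup --server-name myServer --name auto_explain.log_analyze --value ON
az postgres flexible-server parameter set --resource-group myResourceGroup --server-name myServer --name auto_explain.log_min_duration --value 5000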
Execute Procedure
-- Execute the procedure to generate sales statistics
CALL generate_sales_stats('2010-01-01 00:00:00', '2024-01-01 23:59:59');
Access Logs
In the PostgreSQL server blade, look for the Monitoring section in the left-hand menu.
Click on Logs to open the Log Analytics workspace associated with your server. This may redirect you to the Log Analytics workspace where logs are stored.
Run KQL Query
// Filter execution plans for queries taking more than 10 seconds
AzureDiagnostics
| where Message contains " plan"
| extend DurationMs = todouble(extract(@"duration: (\d+\.\d+) ms", 1, Message))
| where isnotnull(DurationMs) and DurationMs > 10000
| project TimeGenerated, Message, DurationMs
| order by DurationMs desc
// Filter logs for a specific instance using the LogicalServerName_s
AzureDiagnostics
| where LogicalServerName_s == "myServerInstance" // Filter for the specific instance
| where Message contains " plan"
| extend DurationMs = todouble(extract(@"duration: (\d+\.\d+) ms", 1, Message))
| where isnotnull(DurationMs) and DurationMs > 10000
| project TimeGenerated, Message, DurationMs
| order by DurationMs desc
// Queries with Sort nodes that have used disk-based sorting methods, where the actual total execution time is over 70% of the total statement time, and that have read or written at least X temporary blocks.
AzureDiagnostics
| where Message contains "execution plan" // Filter for logs containing execution plans
| extend
    SortMethod = extract(@"Sort Method: (\w+)", 1, Message), // Extract the sort method
    TotalExecutionTimeMs = todouble(extract(@"total execution time: (\d+\.\d+) ms", 1, Message)), // Extract total execution time
    DiskSortTimeMs = todouble(extract(@"disk sort time: (\d+\.\d+) ms", 1, Message)), // Extract disk-based sort time
    TmpBlksRead = todouble(extract(@"tmp_blks_read: (\d+)", 1, Message)), // Extract temporary blocks read
    TmpBlksWritten = todouble(extract(@"tmp_blks_written: (\d+)", 1, Message)) // Extract temporary blocks written
| where
    isnotnull(TotalExecutionTimeMs) and
    isnotnull(DiskSortTimeMs) and
    DiskSortTimeMs > 0 and // Ensure disk-based sorting occurred
    (DiskSortTimeMs / TotalExecutionTimeMs) > 0.70 and // Check if disk sort time is over 70% of total execution time
    (TmpBlksRead > X or TmpBlksWritten > X) // Replace X with the threshold value for temporary blocks
| project
    TimeGenerated,
    Message,
    SortMethod,
    TotalExecutionTimeMs,
    DiskSortTimeMs,
    TmpBlksRead,
    TmpBlksWritten
| order by TotalExecutionTimeMs desc
This KQL query filters logs to find entries containing query execution plans, extracts and converts the duration of these queries to milliseconds, and then focuses on those that took longer than 10 seconds. It displays the timestamp, the log message, and the execution duration. For more details on the AzureDiagnostics table, refer to Azure Monitor documentation.
Here is a detailed execution plan extracted from the log analytics workspace:
2024-08-08 20:14:51 UTC-66b52434.3e45-LOG: duration: 194804.905 ms plan:
Query Text: INSERT INTO miner_stats (miner_id, miner_name, stime, cpu_usage, average_mhs, temperature, fan_speed)
SELECT
miners.id AS miner_id,
miners.name AS miner_name,
s1.time AS stime,
random() * 100 AS cpu_usage,
(random() * 4 + 26) * miners.graphic_cards AS average_mhs,
random() * 40 + 50 AS temperature,
random() * 100 AS fan_speed
FROM generate_series(
'2000-10-14 00:00:00',
'2024-08-30 23:59:59',
INTERVAL '59 second') AS s1(time)
CROSS JOIN (
SELECT
id,
name,
graphic_cards
FROM miners
) miners
ORDER BY
miners.id,
s1.time;
Insert on public.miner_stats (cost=153024.30..204774.30 rows=0 width=0) (actual time=194804.893..194804.897 rows=0 loops=1)
Buffers: shared hit=39107638 read=97 dirtied=395027 written=399443, temp read=496743 written=497630
WAL: records=38317668 fpi=1 bytes=4253262922
-> Subquery Scan on "*SELECT*" (cost=153024.30..204774.30 rows=900000 width=76) (actual time=75088.036..113372.899 rows=38317668 loops=1)
Output: "*SELECT*".miner_id, "*SELECT*".miner_name, "*SELECT*".stime, "*SELECT*".cpu_usage, "*SELECT*".average_mhs, "*SELECT*".temperature, "*SELECT*".fan_speed
Buffers: shared hit=2, temp read=496743 written=497630
-> Result (cost=153024.30..191274.30 rows=900000 width=100) (actual time=75088.030..103714.013 rows=38317668 loops=1)
Output: miners.id, miners.name, s1."time", (random() * '100'::double precision), (((random() * '4'::double precision) + '26'::double precision) * (miners.graphic_cards)::double precision), ((random() * '40'::double precision) + '50'::double precision), (random() * '100'::double precision)
Buffers: shared hit=2, temp read=496743 written=497630
-> Sort (cost=153024.30..155274.30 rows=900000 width=70) (actual time=75088.021..90570.249 rows=38317668 loops=1)
Output: miners.id, miners.name, s1."time", miners.graphic_cards
Sort Key: miners.id, s1."time"
Sort Method: external merge Disk: 1249832kB
Buffers: shared hit=2, temp read=496743 written=497630
-> Nested Loop (cost=0.00..11281.25 rows=900000 width=70) (actual time=1621.592..12365.782 rows=38317668 loops=1)
Output: miners.id, miners.name, s1."time", miners.graphic_cards
Buffers: shared hit=2, temp read=28065 written=28065
-> Function Scan on pg_catalog.generate_series s1 (cost=0.00..10.00 rows=1000 width=8) (actual time=1621.564..2913.084 rows=12772556 loops=1)
Output: s1."time"
Function Call: generate_series('2000-10-14 00:00:00+00'::timestamp with time zone, '2024-08-30 23:59:59+00'::timestamp with time zone, '00:00:59'::interval)
Buffers: shared hit=1, temp read=28065 written=28065
-> Materialize (cost=0.00..23.50 rows=900 width=62) (actual time=0.000..0.000 rows=3 loops=12772556)
Output: miners.id, miners.name, miners.graphic_cards
Buffers: shared hit=1
-> Seq Scan on public.miners (cost=0.00..19.00 rows=900 width=62) (actual time=0.016..0.017 rows=3 loops=1)
Output: miners.id, miners.name, miners.graphic_cards
Buffers: shared hit=1
Lack of Support for Cancelled Queries:
One notable limitation of auto_explain is that it does not capture queries that are cancelled before they complete. Here’s what that means:
What It Logs: auto_explain is designed to log the execution plans and performance statistics of queries that finish running and meet specific criteria, such as a minimum execution time.
What It Misses: If a query is interrupted or cancelled—whether due to a timeout, user intervention, or some other reason—it won’t be logged by auto_explain. This means you won’t have detailed information about queries that were terminated prematurely, which can be crucial for diagnosing issues.
Conclusion
Debugging and optimizing stored procedures in PostgreSQL, particularly with complex queries and large datasets, can be a daunting task. However, using tools like the auto_explain module significantly enhances the ability to diagnose performance issues by automatically logging detailed execution plans and runtime statistics. Configuring auto_explain to capture various metrics, such as query duration, buffer usage, and WAL activity, provides deep insights into the performance characteristics of queries.
By combining these detailed logs with additional strategies such as indexing, memory tuning, and using third-party monitoring tools, developers can effectively pinpoint and resolve performance bottlenecks. This methodical approach not only aids in optimizing stored procedures but also in maintaining overall database performance.
For additional insights on optimizing PostgreSQL performance, you can refer to our previous blog on optimizing query performance with work_mem here.
How can I write data repeatedly to an Excel file on a network drive from MATLAB R2024a?
I have a MATLAB script where I am using repeated calls to "writematrix" to write data to an Excel file on a network drive. The script will sometimes run successfully, but sometimes it fails at various lines with the following error.
Unable to write to file ‘…’ You might not have write permissions or the file might be open by another application.
I am looking for a consistent approach to write all of my data to this Excel file.
Why is my GPU not detected by Parallel Computing Toolbox?
When I execute "gpuDevice" command in MATLAB command prompt on a machine with a supported GPU installed, I get the following error message:
ERROR:
>> gpuDevice
Error using gpuDevice (line 26)
No supported GPU device was found on this computer. To learn more about supported GPU devices, see
www.mathworks.com/gpudevice.
Error while using sharepoint migration tool
Hi All,
I'm getting the above error while migrating files from OneDrive to a SharePoint site. Could someone please help?
Edit Permissions for Calendar App only – Read Permissions for other site pages
I have a SharePoint site that 2 users need to be able to only edit the Calendar App. The users DO NOT need to edit other site pages. I’ve looked around and I haven’t found any information on how to do this. SOS
Invites from Outlook have no Teams toggle
Dear Community,
I have tried to find a solution for quite a while, but without success. I have a business account for Microsoft 365. When I start a new meeting, however, there is no toggle to include it as a Teams meeting.
I am allowed to send Teams meetings through Teams, and the Teams add-in in Outlook is also switched on. Finally, outlook.office.com also allows me to send Teams invites.
Anyone has a clue how to fix this?
Thanks in advance!
Best, Gijs
2 Conditional Formatting in 1 cell
Hi!
How can I apply 2 conditional formatting rules to 1 cell at the same time?
I mean, if they both overlap in 1 cell, I want it to show both the color and the border line.
Open email links in Outlook desktop client
I'm playing around with Copilot in Teams. When results contain an email link, they open in the Edge browser. I'd like to change that so they open in the Outlook desktop client. I found the setting to open Excel, Word, and PowerPoint in the client, but not Outlook. Is there a way to do this?
Starting an AI/ML Community Among MLSA Members in Microsoft Support Community.
As members of the Microsoft Learn Student Ambassadors (MLSA) program, we are planning to create an AI/ML community among us. Our goal is to collaborate, share knowledge, and work on AI/ML projects together. To make this initiative successful, we would greatly appreciate any support and advice from Microsoft, especially in terms of resources, guidance, and best practices. We believe that with the right support, we can build a strong and impactful community.