Category Archives: Microsoft
Copilot In Teams Meetings – HLS Copilot Snacks
Copilot in Microsoft 365 is a smart assistant that helps you get the most out of your meetings. Whether you are attending, running late, or unable to join a Teams meeting, Copilot can help you stay on top of the agenda, action items, and decisions. Copilot can transcribe the conversation, highlight the key points, and provide you a summary afterwards. You can also ask Copilot to send a message to the meeting organizer or participants, or to reschedule the meeting if you have a conflict. With Copilot, you can make sure you never miss a beat in your Teams meetings.
In this Copilot Snack, I demonstrate the use of Copilot in Teams meetings.
To see all HLS Copilot Snacks videos, click here.
Resources:
Copilot in Microsoft Teams help & learning (cloud.microsoft)
Get started with Copilot in Microsoft Teams meetings – Microsoft Support
Microsoft Copilot for Microsoft 365 documentation | Microsoft Learn
Thanks for visiting – Michael Gannotti LinkedIn
Deleting Primary Outlook Email
I’m at a loss with how unintuitive this is after searching on my own all day yesterday and finding uncountable conflicting answers, so I decided to come to the experts.
I am a BSA and have a “dummy” Outlook.com email address I set up to test some mail features back when we were using Google Workspace. We’ve now migrated to Outlook and I need to find a way to make my *real* email address primary. I can set it to “primary” in Outlook but that only seems to have anything to do with sending and replying. When I add calendar invites, it still goes to my dummy email’s calendar.
I can’t simply delete the Outlook.com address because it’s primary and Outlook hates that idea. I followed its suggestion of creating a new data file and copying the dummy data into it so I could delete the address, but it has been 3 hours, and it says it’s still migrating data. There’s almost nothing in that email because it’s never been used, so I don’t know what it could possibly still be copying. I even went into my dummy email address’ data file and deleted everything I could to make it “lighter”.
I know complaining about how ridiculous Outlook is in 2024 is an exercise in futility but if you’ll allow me to rant for just a couple sentences to get it off my chest, this whole situation only exists because for some Satanic reason, Outlook (New) and the Mail app don’t accept .ics files.
Any help would be appreciated. If I could just completely wipe my Outlook information and start from scratch, that would be ideal.
Live preview stalled due to low bandwidth or loss of connectivity
Hi all
I hope this post can help those of you having the same problem as me.
Basically, I want to share my HoloLens 2 view to a TV while I author a guide. To be more specific, the TV is a Samsung “The Frame”. I’m using the web Device Portal for HoloLens 2 on my laptop, with an HDMI cable from the laptop to the TV to share the screen.
The problem is that there are quite a few issues with the “live preview” feature in Device Portal. When the live stream starts (it doesn’t matter whether it’s 720p or 480p) it works fine, but as soon as I open any Guides training, the live view starts lagging.
Error description: Live preview stalled due to low bandwidth or loss of connectivity. If this problem persists, consider using a lower quality stream.
Public Preview: Log Analytics Workspace Replication
Azure Monitor Log Analytics uses workspaces as a logical container for logs. Workspaces are region-bound, but workspace replication allows you to create cross-regional redundancy to increase workspace resilience to regional incidents.
What is Workspace Replication?
Workspace replication creates a replica of your workspace in another region, which you choose from a set of supported regions. The original instance of your workspace is referred to as the primary workspace, and the replica in the second region is referred to as the secondary.
The second instance of your workspace is created by the service with the same ID and configuration as your primary workspace (future configuration changes you make will be synced as well). This is basically an active-passive setup – at any given time, your workspace has one active instance, and another one that is updated in the background and can’t be directly managed or accessed.
The secondary instance of your workspace is created empty; logs that were ingested into your workspace before enabling replication are not copied over. Once replication is enabled, new logs ingested into your workspace are replicated, meaning they are sent to both the primary and secondary workspaces. This gives your workspace cross-regional redundancy.
If an incident impacts your primary workspace, causing issues like ingestion latency or query failures, you can trigger failover to switch to your secondary workspace, which allows you to continue monitoring your resources and apps as needed. When you switch to your secondary workspace, it already holds the logs ingested since you enabled replication, so you can continue using alerts, workbooks, and even Sentinel or other services that query your logs.
When the outage is mitigated and your primary workspace is healthy again, you can switch back to your primary region.
Note that replication isn’t free of charge, but it is much more affordable than dual homing (ingesting into two workspaces in different regions) and is easier to manage and maintain. When you enable replication, your logs are effectively ingested into two different regions, and billing is done per replicated GB. You can apply replication to a subset of your Data Collection Rules (DCRs) to control the replication volume and the related costs. See the Azure Monitor pricing page for more information.
What Workspace replication is not
Workspace replication is not a mechanism to copy a workspace and its content to another region, or move it.
Logs that were ingested to your primary workspace before enabling replication aren’t copied over
Your secondary workspace can’t exist if the primary is deleted.
When switching to the secondary workspace, you can’t change workspace settings, including its schema (add tables or columns). These operations can only be done on the primary workspace.
Why not use availability zones instead?
Availability zones provide redundancy of your workspace infrastructure across zones in a single region, and this is always recommended. Workspace replication doesn’t replace availability zones; it works differently, creating a replica of your workspace and of new incoming logs in another region. This is valuable because:
Not all regions support availability zones. If your region doesn’t have availability zones, or the Azure Log Analytics service doesn’t yet use availability zones in your region, workspace replication is the best way to create redundancy for your workspace operation.
Some customers require protection against incidents impacting the entire region. Availability zones are zones within a single region, so if an issue (even a bug) impacts the entire region, switching zones will not help.
Frequently asked questions
Who triggers the region switching? Is it done automatically?
Switching between regions isn’t done by Azure Monitor, and can only be triggered by you. This is because different incidents impact different workspaces to different degrees, and only you can decide when it’s time to switch over. For example, a 2-minute latency in ingestion of a specific data type may be a minor issue for some customers, but very significant to others.
You can create alert rules that automatically switch regions based on ingestion latency, query success rate, or other health measurements. Still, we recommend that alerts instead notify someone who will evaluate the situation and make an informed decision; a sketch of the kind of latency measurement such an alert could act on follows below.
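As an illustration only, the sketch below uses the azure-monitor-query and azure-identity Python libraries to compute average end-to-end ingestion latency for one table. The Heartbeat table, the 30-minute window, and the 10-minute threshold are assumptions to adapt to the data types that matter to you, and the decision to fail over remains a human one.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Average seconds between record creation (TimeGenerated) and ingestion into
# the workspace, over the last 30 minutes, for the Heartbeat table (assumed table).
LATENCY_QUERY = """
Heartbeat
| where TimeGenerated > ago(30m)
| summarize AvgLatencySeconds = avg(datetime_diff('second', ingestion_time(), TimeGenerated))
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query=LATENCY_QUERY,
    timespan=timedelta(minutes=30),
)

rows = response.tables[0].rows
avg_latency = rows[0][0] if rows else None
if avg_latency is None:
    print("No data in the window; that alone may be a signal worth investigating.")
elif avg_latency > 600:  # 10-minute threshold, an assumption
    print(f"High ingestion latency ({avg_latency:.0f}s): notify an operator to consider failover.")
else:
    print(f"Ingestion latency looks healthy ({avg_latency:.0f}s).")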
Do I need to reconfigure all my clients to support this?
No, you don’t need to reconfigure anything. The DNS will reroute all requests sent to the workspace to instead reach the secondary workspace.
Does it replicate the workspace with my data?
No. Only logs ingested after you enable replication are replicated to your secondary workspace. Logs ingested before you enabled replication are not copied over.
But what if my workspace is linked to a dedicated cluster?
Cluster replication will be supported soon. We’ll share an update when this capability becomes available.
What about Sentinel, LogicApp and other services that use my workspace? Will they break when I switch regions?
Services and features that use your workspace continue working against the secondary workspace, seamlessly. Note that switching regions allows your workspace operation to carry on, but it doesn’t handle the other components of these services.
For more information, see the Workspace Replication documentation.
Evaluating Large and Small Language Models on Custom Data Using Azure Prompt Flow
The Evolution of AI and the Challenge of Model Selection
In recent years, the field of Artificial Intelligence (AI) has witnessed remarkable advancements, leading to an unprecedented surge in the development of small and large language models. They’re at the heart of various applications, aiding in everything from customer service chatbots to content creation and software development. These models offer developers a plethora of options, catering to a wide array of applications. However, this abundance also introduces complexity in choosing a model that not only delivers optimal performance on specific datasets but also aligns with business objectives such as cost efficiency, low latency, and content constraints.
The Imperative for a Robust Evaluation Pipeline
Given the diversity in language models, it’s crucial to establish an evaluation pipeline that objectively assesses each model’s efficacy on custom data. This pipeline not only aids in discerning the performance differentials between large and small language models (LLMs and SLMs) but also ensures that the selected model meets the predefined business and technical thresholds.
Pre-requisites:-
Create an AML Workspace. Tutorial: Create workspace resources – Azure Machine Learning | Microsoft Learn
Knowledge base: Ensure documents are chunked, indexed and searchable. Here is a sample notebook to create a knowledge base using AI Search
Create an evaluation dataset: Use a JSON, CSV, or TSV file containing questions and ground truth (a minimal example is sketched below)
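As a minimal sketch of such an evaluation file, the snippet below writes a few question/ground-truth pairs in JSON Lines form; the field names question and ground_truth are illustrative placeholders and should match whatever inputs your flow and evaluation metrics expect.

import json

# Illustrative evaluation rows; replace with questions drawn from your own knowledge base.
samples = [
    {"question": "What file formats does the evaluation dataset support?",
     "ground_truth": "JSON, CSV, or TSV."},
    {"question": "Which model is deployed in this walkthrough?",
     "ground_truth": "phi-3-mini-4k-instruct on an Azure Managed Online Endpoint."},
]

with open("evaluation_data.jsonl", "w", encoding="utf-8") as f:
    for row in samples:
        f.write(json.dumps(row) + "\n")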
Step-by-step implementation details to build an Evaluation Pipeline using Azure Machine Learning (AML) and Prompt Flow (PF)
For this demonstration, we will use the phi-3-mini-4k-instruct model. The approach outlined here is adaptable and can be applied to other models as well including fine-tuned models.
Deploying the Model to an Azure Managed Online Endpoint
Deploying your language model to an online endpoint is a critical step to make models available for inference in a scalable and secure manner, and it is the first step in setting up a robust evaluation pipeline. Azure Managed Online Endpoints provide a secure, scalable, and efficient way to deploy and manage your AI models in production. The Model Catalog within AML Studio offers a host of LLMs and SLMs from Meta, Nvidia, Cohere, Databricks, and a wide range of other providers. These models are available in MLflow format with one-click deployment options for both Model as a Platform (MaaP) and Model as a Service (MaaS), while also providing seamless deployment options for hosting your own custom models or bringing them in from Hugging Face. Refer to the Azure documentation for step-by-step instructions on how to deploy a model to a Managed Online Endpoint.
Here we will deploy the phi-3-mini-4k-instruct model from the Model Catalog on a Managed Online Endpoint.
Note:-
Configure parameters like max_concurrent_requests_per_instance and request_timeout_ms correctly to avoid errors (429 Too Many Requests and 408 Request Timeout) and to maintain acceptable latency levels. Refer to the detailed guidance here.
Sample code snippet:-
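A minimal sketch of such a deployment using the azure-ai-ml Python SDK is shown below; the endpoint name, GPU instance SKU, and registry model path are illustrative assumptions, so take the exact model asset ID and a supported SKU from the Model Catalog page for your region.

from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    OnlineRequestSettings,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<aml-workspace>",
)

# Managed online endpoint that will host the model
endpoint = ManagedOnlineEndpoint(name="phi3-mini-eval-ep", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deployment of the catalog model behind the endpoint
deployment = ManagedOnlineDeployment(
    name="phi3-mini-4k",
    endpoint_name=endpoint.name,
    # Assumed registry path; copy the exact asset ID from the Model Catalog
    model="azureml://registries/azureml/models/Phi-3-mini-4k-instruct/labels/latest",
    instance_type="Standard_NC24ads_A100_v4",  # assumed GPU SKU
    instance_count=1,
    request_settings=OnlineRequestSettings(
        max_concurrent_requests_per_instance=2,  # tune to avoid 429s
        request_timeout_ms=90000,                # tune to avoid 408s
    ),
)
ml_client.online_deployments.begin_create_or_update(deployment).result()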
Introduction to Prompt Flow and Batch Evaluation
Microsoft Azure’s Prompt Flow, a powerful tool within Azure Machine Learning, is designed to streamline the creation and management of machine learning models. It provides an intuitive interface that guides users through the model creation process, from data ingestion and preprocessing to training and deployment. By orchestrating executable flows of LLMs, prompts, and Python tools through a visualized graph, it streamlines the testing, debugging, and evaluation of different prompt variants, easing the prompt engineering task. Prompt Flow (PF) empowers developers and data scientists to focus more on strategic tasks and less on operational complexities. This tool is particularly useful for teams looking to accelerate their machine learning and generative AI lifecycle and deploy scalable models efficiently.
Prompt Flow offers a suite of prebuilt evaluation metrics tailored for GenAI-based models, including Groundedness, Relevance, Coherence, Fluency, and GPT-based ranking against reference data, alongside traditional metrics like the F1 score. These metrics provide a comprehensive framework for assessing the performance of LLMs, SLMs, and Azure OpenAI models across various dimensions. It also provides a seamless way to extend this list with custom metrics like BLEU score, ROUGE score, precision, recall, and others. This flexibility allows users to extend the evaluation process by incorporating unique metrics that cater to specific business requirements or research objectives, thereby enhancing the robustness and relevance of model evaluations. Below are some of the built-in metrics available within PF:
Setting Up the Prompt Flow Evaluation Pipeline
Step 1: Create connections for AOAI, the embedding model, and the custom model. Also create a connection to the knowledge base.
For custom and open-source models, establish a custom connection by providing details about the inference server endpoint, key, and deployment name.
For the knowledge base, Azure AI Search provides secure information retrieval at scale over user-owned content in traditional and generative AI search applications. Chunk and index your documents using AI Search, then use the built-in Azure AI Search connector within PF to establish a connection to the knowledge base; a minimal indexing sketch follows below.
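As a rough sketch of the chunk-and-index step, assuming a plain keyword index and the azure-search-documents Python library (the index name, field names, and keys are placeholders; a production setup would typically add vector fields and an embedding step):

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
)

endpoint = "https://<search-service>.search.windows.net"  # placeholder
credential = AzureKeyCredential("<admin-key>")             # placeholder

# Define a simple index for document chunks
index = SearchIndex(
    name="kb-chunks",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="content", type=SearchFieldDataType.String),
        SimpleField(name="source", type=SearchFieldDataType.String, filterable=True),
    ],
)
SearchIndexClient(endpoint, credential).create_or_update_index(index)

# Upload a couple of pre-chunked passages
chunks = [
    {"id": "1", "content": "First chunk of a source document...", "source": "doc1.pdf"},
    {"id": "2", "content": "Second chunk of a source document...", "source": "doc1.pdf"},
]
SearchClient(endpoint, "kb-chunks", credential).upload_documents(documents=chunks)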
Step 2: Utilize Multi-Turn Q&A Flow from PF gallery
The PF gallery provides a range of pre-built flows that can be cloned and customized, or you can build one from scratch. We will clone the multi-turn Q&A flow available within the gallery for a streamlined setup.
Step 3: Runtime: Select existing runtime or create a new one
Before you start authoring, you should first select a runtime. A runtime serves as the compute resource required to run the prompt flow and includes a Docker image that contains all necessary dependency packages; it’s a must-have for flow execution.
You can select an existing runtime from the dropdown or select the Add runtime button, which opens the runtime creation wizard. Select an existing compute instance from the dropdown or create a new one, then select an environment to create the runtime. We will use the default environment to get started quickly.
Step 4: Map the input and output of each node
Ensure each node points to the right indexes, AOAI, and custom connections.
Step 5: Modify the prompt variant to set the bot tone, personality, postprocessing of retrieved context and any additional formatting if required.
Sample Prompt:-
You are an AI assistant that helps users answer questions given a specific context. You will be given a context and asked a question based on that context. If the information is not present in the context or if you don't know the answer, simply respond by saying that I don't know the answer, please try asking a different question. Your answer should be as precise as possible and should only come from the context.
Context : {{context}}
Question : {{question}}
AI :
Step 6: Add a Python tool as a new node and replace the existing code with the one below:
It essentially makes an API call to the managed endpoint using the custom connection created in Step 1 and parses the fetched response. Validate and parse the input, map all the connections, and save.
import json
import urllib.error
import urllib.request

from promptflow import tool
from promptflow.connections import CustomConnection


@tool
def my_python_tool(message: str, myconn: CustomConnection) -> str:
    # Get the endpoint URL and authentication key from the custom connection
    url = myconn.api_base
    api_key = myconn.api_key

    # Request payload expected by the managed online endpoint
    data = {
        "input_data": {
            "input_string": [
                {"role": "user", "content": message},
            ],
            "parameters": {
                "temperature": 0.3,
                "top_p": 0.1,
                "max_new_tokens": 200,
            },
        }
    }

    body = str.encode(json.dumps(data))
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
        "azureml-model-deployment": myconn.deployment,
    }
    req = urllib.request.Request(url, body, headers)

    try:
        response = urllib.request.urlopen(req)
        result = json.loads(response.read().decode("utf-8"))["output"]
        return result
    except urllib.error.HTTPError as error:
        return "The request failed with status code: " + str(error.code)
Step 7: Execute the Flow
Run the flow to ensure all connections are operational and error-free.
Step 8: Evaluate the Model
Click on ‘Evaluate’, select the desired metrics, and map the data accordingly. If using GPT-based metrics like Similarity-Score, ensure that the AOAI model is deployed and specify the deployment details in the evaluation section.
Step 9: Submit and Review Results
After you finish the input mapping, select Next to review your settings, then select Submit to start the batch run with evaluation. After submission, you can find the batch run details in the Run tab on the PF page. Click View output to check the responses and associated metrics generated by the flow; if you want to download the results, click the “Export” tab and download the outputs in .csv format.
Step 10: Adding custom metrics to the list of built-in evaluation metrics
Create a new PF from the gallery, but this time select the “Evaluation Flow” card within the gallery section and choose one of the evaluation flows. Here, we will select “QnA Groundedness Evaluation” from the gallery and add a custom BLEU score to measure the similarity of machine-generated text to reference text. Add a new Python tool to the end of the flow, replace its code with the snippet below, update the mapping, and save the flow.
Note:- If you use any third-party dependency, make sure to add the required library to requirements.txt within the Files section. The example below uses nltk, so add nltk to requirements.txt.
from nltk.translate.bleu_score import sentence_bleu

from promptflow import tool


@tool
def get_bleu_score(groundtruth: str, prediction: str):
    # sentence_bleu expects a list of reference token lists and a single hypothesis token list
    ref_tokens = groundtruth.split()
    pred_tokens = prediction.split()
    return sentence_bleu([ref_tokens], pred_tokens)
Step 11: Submit a new batch evaluation
This time the newly added custom metric should show up in the customized evaluation section.
Select the required metrics, rerun the evaluation and compare the results.
Conclusion
Through Azure’s robust infrastructure and Prompt Flow, developers can efficiently evaluate different language models on custom datasets. This structured approach not only helps in making informed decisions but also optimizes model deployment in alignment with specific business and performance criteria.
Reference
What is Azure Machine Learning prompt flow – Azure Machine Learning | Microsoft Learn
Same name with different domains
Hi,
Sometimes there are similar personnel names from different suppliers, and a lot of us might send an email and CC email address removed for privacy reasons instead of email address removed for privacy reasons by mistake.
Is there any feature to make Outlook alert the sender in case the address bar contains an email with a different domain?
If there is, please share the relevant link.
Thank you,
Ahmad
Outlook 365 opening .msg attachment works once, then it goes to black screen at the next attachment
Hi,
I have a few users (me included) who are having a peculiar problem with Outlook 365. When they open a mail in Outlook which contains multiple .msg attachments, the first one opens just fine in the preview tab, but the other ones don’t open. I only get a white/black screen (depending on which theme is installed). No matter which attachment you click first, the first one you click will always open, but the rest don’t.
If I then switch back to the first attachment, which worked fine, that one stops working too. It only works when you double-click and open them separately, but that is not a solution for the users.
What I’ve tried:
– All Office updates, it’s up-to-date now
– Quick repair of Office
– Full reinstall of Office
– Outlook.exe /safe
– Outlook.exe/ reset views
– Reset the view in Outlook manually
– New mail profile
– Using cache or online mode
Nothing works and I can’t think of something else to do. Does anybody recognize this issue or knows what to do? I’ve included a screenshot of what we are getting
Azure timer trigger function hangs for a few minutes and then resumes
Hello everyone,
I have to ask if anyone has any experience with a situation where a timer trigger function just hangs for some reason and then resumes after a few minutes (could be 2-3 minutes, could be up to 8).
I have a function app with a consumption plan and one function in it, which is a timer trigger function, that’s supposed to fetch some items through graphAPI and load them into the storage account queue.
Everything works well, but every time it runs (every 10 minutes), it just freezes at some point, only to resume after a few minutes.
I had to extend the functionTimeout value to 10 minutes, because prior to that it would just hang and the function would timeout. Now it’s at least able to do some processing before the timeout happens.
Here is a picture from App Insights of the last print before the hang:
And now a picture of the first print after it resumes:
As you can see, there is no error, traceback, anything. Also the hang happens quite early after the start (usually within the first 20s) and at a random point, so I couldn’t narrow it down to a faulty code.
I think there is some issue with the resources, the host lock lease, or some underlying architecture, but I don’t know where to look.
I’d gladly accept any advice or any new lead that I could chase.
Thanks!
Populate Cells from another sheet based on dropdown selection
Hi All
Thanks for taking the time to read this; unfortunately I’m unable to achieve what I would like, so hopefully someone can help. I would like to populate two different cells based on whichever dropdown option I select, using data from another sheet. I can’t figure out how to link the cells to the dropdown option and correlate that to the data. See below: when I select a dropdown option (C15), I would like it to fill the cells RX Freq (F15) and TX Freq (I15) dependent on the channel I select.
I have linked the channel cell to my sheet with the data and now have the options in dropdown form. The top image is from one sheet, the values are in another called ‘data’.
Any help much appreciated.
Out of office status API to SharePoint
Hi,
I am working on a project where I have a list of users with some additional details.
I have developed a view from this list with a script editor.
I am displaying all the information for each user, including the user photo.
What I am struggling with is how to fetch the user status.
Here is an example of how I am retrieving the rest of the values:
I found a solution using the Microsoft Graph API, so I created a new application in Azure and added the following permissions:
After that I added this script to a test page to try out the feature:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Presence Test</title>
  <script src="https://alcdn.msauth.net/browser/2.15.0/js/msal-browser.min.js"></script>
</head>
<body>
  <h1>Presence Test</h1>
  <div id="user1-presence">User 1 Presence: <span id="presence1">Loading…</span></div>
  <div id="user2-presence">User 2 Presence: <span id="presence2">Loading…</span></div>
  <script>
    const msalConfig = {
      auth: {
        clientId: 'YOUR_CLIENT_ID', // Replace with your Azure AD app client ID
        authority: 'https://login.microsoftonline.com/YOUR_TENANT_ID', // Replace with your tenant ID
        redirectUri: 'YOUR_REDIRECT_URI' // Replace with your redirect URI
      }
    };

    const msalInstance = new msal.PublicClientApplication(msalConfig);

    const loginRequest = {
      scopes: ['Presence.Read']
    };

    function fetchUserPresence(userEmail, callback) {
      msalInstance.loginPopup(loginRequest).then(loginResponse => {
        return msalInstance.acquireTokenSilent(loginRequest);
      }).then(tokenResponse => {
        fetch(`https://graph.microsoft.com/beta/users/${userEmail}/presence`, {
          method: 'GET',
          headers: {
            'Authorization': `Bearer ${tokenResponse.accessToken}`,
            'Content-Type': 'application/json'
          }
        })
        .then(response => response.json())
        .then(data => {
          callback(data);
        })
        .catch(error => {
          console.error("Error fetching presence data:", error);
          callback({ availability: 'Unknown' });
        });
      }).catch(error => {
        console.error("Error during authentication:", error);
        callback({ availability: 'Unknown' });
      });
    }

    document.addEventListener('DOMContentLoaded', function() {
      const user1Email = 'email address removed for privacy reasons'; // Replace with the actual email of user 1
      const user2Email = 'email address removed for privacy reasons'; // Replace with the actual email of user 2

      fetchUserPresence(user1Email, function(presence) {
        document.getElementById('presence1').innerText = presence.availability;
      });
      fetchUserPresence(user2Email, function(presence) {
        document.getElementById('presence2').innerText = presence.availability;
      });
    });
  </script>
</body>
</html>
The result I am getting is: Unknown
I feel like there is something small I need to fix here.
Could you please help?
How to make a test case work item mandatory while creating a bug in Azure DevOps
I have to create a mandatory rule: as soon as any bug gets logged, it must have a test case linked to it.
GetPropertiesFor not returning properties with “Only Me” privacy
Hi,
I’m trying to get user profile properties with privacy set to “Only Me” using REST API.
I’m using url:
https://<company_name>-admin.sharepoint.com/_api/SP.UserProfiles.PeopleManager/getpropertiesfor(accountName=@v)?@v=’i:0%23.f|membership|test.user@<company_name>.com’
It returns a lot of properties, but properties with privacy set to “Only Me” are not present.
If I change accountName to the same account I’m using to make the request, the “Only Me” properties are present.
If I change “Only Me” to “Everyone”, the property is present in the response.
I’m using an account with admin privileges and can see/edit those properties on the SharePoint admin web page. I expect the same behavior from the REST API. Any idea how to get those properties?
Building a spreadsheet with data averages
I have created a spreadsheet that tracks the average of data for the year. Can I enter two formulas in one cell?
I am using one formula, =AVERAGE(), to calculate each month; this works correctly.
But since some months have no data entered, I receive the #DIV/0! error.
I understand that as I enter each month’s data, the message goes away.
My total for the year will not calculate as long as the #DIV/0! error is listed.
Microsoft Defender for Cloud Apps session policy does not work for a Sensitivity Label file
We are using Microsoft Defender for Cloud Apps with the goal of implementing controls to prevent users from downloading sensitivity-labelled documents to unmanaged/personal devices.
To accomplish this, in MDFCA we created a Session Control policy to block these activities for test users accessing M365 via a web browser. The policy configuration is below:
– Session Control type: Control file download (with inspection)
– Activities matching all of the following:
  – App equals Microsoft Online Services (and all sub-services)
  – User Name equals [test users]
  – Device Tag does not equal Hybrid Azure AD Joined, Valid Client Certificate
– Files matching all of the following:
  – Sensitivity label equals [sensitive labels]
– Inspection method: None
– Actions: Block
Weird issue pasting repeatedly from clipboard whenever I drag a task or subtask
I have a very weird issue since updating to 2.114.7122.0 when dragging an item around the screen, either a task or a subtask/checklist item. It basically slows right down and repeatedly pastes from my clipboard into the New Task or sometimes the Notes field while I’m dragging (it seems to depend on where I have clicked recently), and so I end up with stuff in there like
LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568LY2405080568
I have reinstalled the app and it’s still the same.
Unable to create Community Live event
I have an E3 license and am unable to create a live event from a Viva Engage community. It shows the following error:
“A Teams and/or Stream license and permissions are required to produce a live event. Please contact your IT admin.”
Please confirm which admin setting or license is required to create a live event.
How to Recover and Reset QuickBooks Desktop Password
I’m having trouble accessing my QuickBooks Desktop account because I forgot my password. Can you guide me on how to recover and reset my QB Desktop password? Any help would be greatly appreciated. Thanks!
Copilot no connection
Good afternoon. I can’t use the Copilot app. I get an error about a poor connection, and it suggests logging in with a different account and checking the settings. I updated the app, turned the system off and back on, and logged back into my account. Nothing helps. Is there a solution to this problem?
Crashing Version 127.0.2604.0
It again crashes on certain pages, for example https://t.co/lj6OAfb38v
Search Crawler Error 0x80041205
Please help in getting this resolved; search crawling is not working. I found this error in the ULS logs:
Search Crawler error 0x80041205
401 Unauthorized access