Tag Archives: microsoft
Trying to Build Truly Non-Persistent Machines
I’m trying to test a setup that will be used for public machines in a library setting through AVD and thin clients. To do this, I’ve built an AVD pool, which (according to documentation) is supposed to be completely non-persistent. However, I’ve noticed that if a user logs in, makes changes (creates files, changes settings, etc.), logs out, and then connects to the same virtual machine the next time they log in, their changes remain.
This isn’t non-persistent as advertised, and is definitely a problem. Since the machines this setup is being tested for will have potentially hundreds of users logging in, changes cannot remain when they log off. Does anyone know of a way to actually make these machines non-persistent?
An important note: We won’t be able to utilize 3rd-party software like Nerdio (which I see a lot of people recommend, and which I wish I could use) because we’re a public/government organization, and there’s just no budget for a tool like that.
Change a custom color palette
Afternoon all,
An ex-employee set up our SharePoint site with our corporate colours; however, he managed to make the rollover background colour the same as the text colour, and our buttons have a red background with navy text. Both are unreadable.
I cannot for the life of me find out how to edit just these two little bits; the rest of the site looks great.
Any help would be fantastic, thanks.
Ed
Power Apps License usage – Graph API endpoint
Hi All
Is there a Microsoft Graph API endpoint that provides insights into the utilization versus purchase of Power Apps licenses, detailing which environments they are used in and by whom?
How can we achieve this?
Thanks
Kumar
Cardinality Estimation Feedback Explained by Kate Smith (@akatesmith)
The Problem of Cardinality Estimation
There is a single main query processing “engine” that powers SQL Server and Azure SQL Database. Within that engine is the part of a program that optimizes and executes queries. We call the process of optimizing and executing a query “Query Processing” or QP for short. Optimization is a step in which the query is, or can be, rewritten or reorganized to semantically equivalent but faster/more efficient operators. A critical part of that process is determining the approximate sizes of different sub-parts of a query. Think answers to questions like: “How many rows will this join return?” or “How many rows will be filtered out of this table with this clause?” This information is used by the optimizer at large to determine the right operators and right order of operations for best performance, and the process of generating these size-of-sub-result numbers is called cardinality estimation, or CE.
CE is based on two things:
Statistical information about the underlying data as stored in histograms.
General rules, approaches, or assumptions on how to combine sub-results and interpret histograms. These rules, taken together, are called the model.
The Model
The CE Model is a nuanced part of the estimation component, as it attempts to create general rules that will apply to specific situations, with limited ability to detect specifics about any one given situation. Take, for example, the rules within the model that determine how we estimate rows coming out of a join. Sometimes, the two tables being joined have an inherent relationship. The example we’ve used most often for this scenario is ‘Sales’ and ‘Returns’. It holds that every entry in the ‘Returns’ table must have a corresponding entry in the ‘Sales’ table. If the model assumes this kind of a relationship between any two tables, it would be effective for queries that match this pattern.
However, consider the situation where two tables are not related in such a direct way – and any relationship between them becomes apparent only after certain filters are applied. Take for example a table tracking the languages people speak and people’s addresses. Only after filtering for people living in, say, Quebec on one side and filtering for people who speak French on the other would you have a kind of implied relationship. (Many people who live in Quebec speak French). There is no inherent relationship of meaning between the two tables – the meaning is only added by the filters. Therefore, a different assumption on the part of CE – that the filters impute meaning into the join relationship – is also a very valid way to approach estimating joins between two tables.
So now we’ve just presented, at a high level, two different model assumptions, and given examples where one model fits but the other does not. The problem in software is that we need a single general rule – we can’t know BEFORE executing the query which rule is the better rule for any given situation.
This is the challenge of the model – finding that perfect single set of rules that will work for all (or at least a majority of) situations.
It can be helpful to think of the model across three basic dimensions:
Correlation to Independence
Data uniformity to skewed data
Semantic relationships between tables being implicit in structure, or implied by filters
Of these three dimensions, the first two exist on a little bit of a continuum, whereas the last is a bit of a toggle – either one model or the other, but not really a sliding scale in between. Across these multiple dimensions, different data sets and different query patterns can be anywhere in the space of what assumptions would work best for each query. Some patterns may be more common than others, but each pattern is valid. So, how do you design a system that will work in a general way?
Before SQL Server 2022, we had to pick a single default model for all queries. There were ways that advanced users could debug a query and possibly add trace flags or hints to a query to force a different model. That process was tedious, time-consuming, and required expert knowledge. We had to select a default model, and that was pretty much what customers got.
Enter Cardinality Estimation (CE) Feedback
CE Feedback was introduced in SQL Server 2022 and is now also available in Azure SQL Database.
This feature tailors the model to each query after a few executions, ensuring any changes made do not regress the query.
Where before we had a single model used for all queries, CE Feedback can now modify the model appropriately for each individual query, in a “do no harm” way.
How it works
A query is executed repeatedly (using the default model) and captured by the Query Store.
If the query estimates seem “off” or like they could use improvement, we identify one or more ways in which a different model might address the problematic estimate.
We try out the new model and evaluate:
Are estimates better?
Is performance better?
Did the query plan change?
If the answer to all three of these questions is ‘yes’, we persist this new model for this query only. It will be stored as a query store hint.
We try only one model variant at a time but can continue to identify further model variants to be combined with the ones we have already found, so the process may repeat and refine.
This all happens without user action or input.
CE Feedback works only for queries that are executed repeatedly, and we must have the query information in Query Store. Each query starts with the default model for estimation. After the requisite number of executions, we can determine if a different model might work better for the query – we make these decisions based on the accuracy of our estimates. If a new model variant could be tried, the CE Feedback framework tries it out on the next execution. If the plan changes and the estimates/runtime are better, we keep the new variant. If the plan does not change, or if the estimates do not improve, we do NOT keep the change. Keeping the change involves adding a hint (using Query Store Hints) to the query which will be persisted and used for that query for all future executions.
It sounds very simple, right? Identify a problem, try a solution, and if the solution works, keep it. All without user input or action. So, if your workload contains a range of queries whose optimal model spans our model dimensions, we can adjust the model for each query, with no action on your part, to be optimal or closer to optimal than the default.
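The keep-or-discard decision described above can be sketched in a few lines; this is a toy illustration of the logic, not SQL Server’s actual implementation, and all names and numbers are made up.

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    estimate_error: float  # relative error of the cardinality estimates
    duration_ms: float     # observed query runtime
    plan_id: int           # identifier of the compiled plan

def keep_variant(baseline: TrialResult, trial: TrialResult) -> bool:
    """CE Feedback keeps a model variant only if estimates improved,
    performance improved, AND the query plan actually changed."""
    return (trial.estimate_error < baseline.estimate_error
            and trial.duration_ms < baseline.duration_ms
            and trial.plan_id != baseline.plan_id)

# Better estimates and runtime, and a new plan -> the variant is persisted
baseline = TrialResult(estimate_error=0.9, duration_ms=480.0, plan_id=1)
print(keep_variant(baseline, TrialResult(0.1, 120.0, 2)))  # True
# Same plan -> the change is discarded even though estimates improved
print(keep_variant(baseline, TrialResult(0.1, 120.0, 1)))  # False
```

In the real feature, “keeping” the variant means persisting a Query Store hint for that one query, so all future executions use the adjusted model.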
CE Feedback customizes the model used for each query automatically, without user action.
This feature reduces the limitations of a one-model-fits-all approach.
The model and available changes
Each query still starts with the default CE Model for database compatibility level 160, and there are three currently implemented model changes available to users. In this section, we’ll discuss the available dimensions, explain where the model starts, and explain the variants available.
Independence vs. Containment
Relevant scenario:
The query has multiple filters on a table joined by an ‘AND’ clause. Something like:
WHERE City = 'Seattle' AND State = 'WA'
In this case, we have two predicates:
p1: City = 'Seattle'
p2: State = 'WA'
They are joined in a conjunctive clause. Each predicate has a selectivity: the probability that the predicate is true for a given row in the database. In this case, assume the selectivities of p1 and p2 are s1 and s2. Selectivity values are always less than or equal to 1. So, if 1/3 of the rows in the table match a filter, that filter’s selectivity would be 0.33.
As we work through this example, it’s useful to remember that in probability theory, the probability of two independent events occurring is the product of the probabilities of each event respectively. As an example, for a fair coin toss, we must multiply the probability of flip 1 being heads (0.5) by the probability of flip 2 being heads (0.5) to get the probability of BOTH flips being heads (0.25).
Default Model:
The default model along this spectrum is a middle ground between assuming complete independence and assuming complete dependence between the filters. This is called exponential backoff. To find the selectivity of the conjunction, we order the selectivities from most to least selective, then dampen each successive selectivity by raising it to the next power in the sequence 1, ½, ¼, ⅛, and so on. That is, if we have 3 predicates whose selectivities ordered from most to least selective are s1, s3, s2, the selectivity of p1 AND p2 AND p3 is:
Selectivity of conjunction = s1 * s3^(1/2) * s2^(1/4)
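As a sketch, the exponential backoff combination can be reproduced in a few lines of Python; this is a toy illustration of the formula above, not the engine’s actual code.

```python
def exponential_backoff(selectivities):
    """Combine conjunctive filter selectivities using exponential backoff:
    sort from most to least selective (smallest value first), then raise
    each successive selectivity to the powers 1, 1/2, 1/4, 1/8, ..."""
    result = 1.0
    for i, s in enumerate(sorted(selectivities)):
        result *= s ** (0.5 ** i)
    return result

# Three predicates with selectivities 0.1, 0.2, 0.4:
# combined selectivity = 0.1 * 0.2^(1/2) * 0.4^(1/4) ~= 0.0356
print(exponential_backoff([0.4, 0.1, 0.2]))
```

Note how each additional predicate still reduces the estimate, but by less and less, which is exactly the “partial correlation” middle ground between multiplying everything (independence) and ignoring all but one filter (dependence).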
Relevant Hint:
You get this model without any hint, as it is the default, but if you wanted to force it, you could use the hint ASSUME_PARTIAL_CORRELATION_FOR_FILTER_ESTIMATES (see this documentation link for more details).
Variants available:
For this model, we have two available variants. One is complete independence, and one is complete dependence.
Independence
The independence model is the most selective option: it simply multiplies the selectivities of each predicate, following the probability rule for independent events. Thus, the selectivity of the conjunction becomes s1 * s2 * s3. Given that all the selectivities are less than or equal to one, this number will be no larger than the exponential backoff number computed above. We use this variant when the default model is overestimating the number of rows returned by a filter with a conjunctive clause.
Relevant Hint:
ASSUME_FULL_INDEPENDENCE_FOR_FILTERS
Dependence
The dependence model is the least selective option: it simply takes the single most selective predicate and uses that as the selectivity of the conjunction, returning only s1. The example above seems to fit this model best: everything located in Seattle is also located in Washington state, so once we have filtered down to Seattle, we do not need to reduce further for Washington. We use this variant when the default model is underestimating the rows returned by a conjunction.
Relevant Hint:
ASSUME_MIN_SELECTIVITY_FOR_FILTERS
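To make the three assumptions concrete, here is a small Python comparison for the two-predicate City/State example; the selectivity values are made-up numbers for illustration only.

```python
import math

# Hypothetical selectivities for City='Seattle' and State='WA'
s = [0.05, 0.08]

independence = math.prod(s)           # s1 * s2: multiply everything
backoff = s[0] * s[1] ** 0.5          # s1 * s2^(1/2): the default model
min_selectivity = min(s)              # keep only the most selective predicate

print(independence)     # 0.004   -> smallest estimate
print(min_selectivity)  # 0.05    -> largest estimate
print(independence < backoff < min_selectivity)  # True
```

The ordering always holds: independence gives the lowest row estimate, minimum selectivity the highest, and exponential backoff sits in between, which is why CE Feedback can move in either direction depending on whether the default over- or underestimates.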
Uniformity
For the database compatibility level 160 version of CE Feedback, we make only one change related to the overall model assumption of uniformity: whether to use a ‘Row Goal’ during optimization. There are other scenarios in which the uniformity assumption is used that CE Feedback does not currently modify.
Relevant Scenario and default model:
A user executes a query with a ‘Top N’ or ‘Fast N’ or ‘IN’ construct. The optimizer looks at the number of rows that it thinks will match any underlying filters and chooses to use a ‘Row Goal’ optimization – assuming that we can scan just a small number of rows in the table to fulfill the specified number of rows or relevant clause.
Alternative model:
However, due to the way that we build and interpret histograms, it is possible to incorrectly overestimate the number of rows and assume that a small scan will meet whatever goal is set by the query. When this happens, the query ends up scanning a lot more rows during execution than expected, and likely an index seek (which, instead of paging in all rows, pages in a targeted set of rows matching given predicates) would have been a more appropriate choice.
Relevant Hint:
DISABLE_OPTIMIZER_ROWGOAL
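To see why an overestimate matters here, consider a toy model of the row goal arithmetic (a simplified illustration, not SQL Server’s actual costing): under the uniformity assumption, the optimizer expects to scan roughly total_rows * goal / estimated_matches rows before the goal is satisfied.

```python
def expected_scan_rows(total_rows, estimated_matches, goal):
    """Rows the optimizer expects to scan to satisfy a TOP-N row goal,
    assuming matching rows are spread uniformly through the table."""
    estimated_matches = max(estimated_matches, goal)  # can't need fewer than goal
    return total_rows * goal / estimated_matches

total, goal = 1_000_000, 10
# Optimizer believes 100,000 rows match -> a tiny scan looks sufficient
print(expected_scan_rows(total, 100_000, goal))  # 100.0
# Only 20 rows actually match -> the "cheap" scan reads half the table
print(expected_scan_rows(total, 20, goal))       # 500000.0
```

When the estimate of matching rows is badly inflated, the scan chosen under the row goal ends up far more expensive than the index seek that would otherwise have been picked, which is the situation the DISABLE_OPTIMIZER_ROWGOAL variant addresses.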
Join Relationships (a.k.a. join containment)
Relevant Scenario and default model:
This scenario is as described above in the section introducing the concept of the model – with ‘Sales’ and ‘Returns’ tables. We have two tables that are joined together, with filters on one or both tables. The current default model assumes that there is an underlying relationship between the tables (think ‘Sales’ and ‘Returns’). So, we estimate the join assuming a containment relationship, and scale down the number of rows from the join based on the filters. We sometimes call this “Base Containment”.
Alternative Model
Since the CE doesn’t know anything about the underlying semantic relationship that may or may not exist in a user query, we have an alternate way of estimating joins if the default model seems like it could be improved. This other option assumes that the filters impute meaning into the join, and so we estimate the filters first, followed by estimating the join based on assuming an underlying relationship between the rows returned from the filters. We sometimes call this “Simple Containment”.
It is not guaranteed that the new model will always be better, but if we think one model creates poor overall estimates, we may try the other option to see if it results in better estimates. It may not, and we will not keep the change if it does not.
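A small numeric sketch can show the order-of-operations difference between the two containment assumptions; the join formula below is a simplified textbook containment estimate, not SQL Server’s actual arithmetic, and all numbers are invented.

```python
def join_estimate(rows_a, rows_b, ndv_a, ndv_b):
    """Classic containment-style join estimate: |A join B| ~ |A|*|B| / max(ndv)."""
    return rows_a * rows_b / max(ndv_a, ndv_b)

rows_a, rows_b = 100_000, 50_000   # base table sizes (made up)
ndv_a, ndv_b = 1_000, 800          # distinct join-key values (made up)
sel_a, sel_b = 0.01, 0.02          # filter selectivities (made up)

# "Base containment": estimate the join on the base tables,
# then scale the result down by both filters.
base = join_estimate(rows_a, rows_b, ndv_a, ndv_b) * sel_a * sel_b

# "Simple containment": apply the filters first, then assume the
# filtered row sets are related; filters also shrink the key domains.
fa, fb = rows_a * sel_a, rows_b * sel_b
simple = join_estimate(fa, fb, ndv_a * sel_a, ndv_b * sel_b)

print(base)    # ~1000
print(simple)  # ~62500
```

In this toy setup, simple containment produces a much larger estimate because it assumes the filters themselves create the relationship between the surviving rows, whereas base containment assumes the relationship exists at the table level and the filters merely thin it out. Neither is universally right, which is exactly why CE Feedback trials the alternative and keeps it only if estimates improve.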
Relevant Hint:
ASSUME_JOIN_PREDICATE_DEPENDS_ON_FILTERS
Limitations
As with everything in the query optimizer, there is no magic button that will solve all estimation errors. There may still be queries with out-of-model constructs that we cannot fix with CE Feedback today. And sometimes, under a new model, even when estimates get better, the plan does not change, or performance doesn’t improve enough to justify keeping the new model. And of course, we have only a handful of model variants that we can try with CE Feedback, so the ideal model, or assumption, may not exist for a given query.
Conclusion
This was a whirlwind tour of cardinality estimation, the model, and the limitations of the model. We provided an overview of what CE Feedback does to address the limitation of a single fixed model, with explanations of the three model variants that are currently supported by CE feedback.
Drop a comment below if you think of a specific model variation that you’d like to see us add.
Enhancing Copilot Studio Extensions for SAP by using Adaptive Cards and Principal Propagation
In the previous blog, SAP connectivity from Copilot Studio and the Power Platform was explored. Let’s now look at an enhanced version of that scenario in the newer video below:
Scenario description:
Let’s say, the salesperson realizes that material in the sales order placed by the customer is not available and wants to help replace the unavailable item for the customer in the sales order.
The salesperson does the following:
The salesperson asks the Copilot to help her look through all the materials in the SAP system to find the best replacement.
The salesperson gets a suitable replacement suggestion from Copilot and tries to access stock information for that material. However, she is not authorized to do so, as she lacks the required authorization in the SAP system.
The salesperson messages her colleague who does have the authorization to check material stock information for her. The colleague checks the stock information and informs the salesperson that the material is in stock.
The salesperson now decides to update the sales order with the new material and remove the old material from the sales order.
As you can see from the video, the scenario has been enhanced with two additional Power Platform and Copilot Studio abilities:
Adaptive cards
Authorization/ Principal Propagation
Below is a description of what they are and how to begin implementing them.
Adaptive cards:
Information returned via chatbots shouldn’t be restricted to looking only as good as the UI of the platform you deploy the bot on. It is more interactive, and more personal to your brand, to have a way to choose how users get to interact with the bot you created. This is where adaptive cards are a game changer.
Adaptive cards in Copilot Studio allow you to add interactive snippets of content, such as text, graphics, and buttons, to enhance conversation experiences with Copilots. You can read more about them here.
Here are some examples in the above scenario where adaptive cards were used:
1) To display information from the SAP system in a digestible and visually appealing format, with the SAP logo to show that the information comes from the SAP system.
2) To create a form-like input while modifying the sales order, providing an easy way to collect information from the user, with the SAP logo to show that the change will be made in the SAP system.
To create and modify adaptive cards in the Copilot Studio, you can add an adaptive card either to a question or the message as shown below.
You can then modify the JSON code (as shown below) to make it look the way you want and add URLs for any images/logos you would like to show. You can also use the adaptive card designer to get a better idea of the elements you can make use of.
The code for the adaptive cards used in the scenario is in the GitHub repo.
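For orientation, a minimal adaptive card skeleton looks like the following; the text, image URL, and action here are placeholders for illustration, not the cards used in the scenario.

```json
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.5",
  "body": [
    {
      "type": "Image",
      "url": "https://example.com/sap-logo.png",
      "size": "Small"
    },
    {
      "type": "TextBlock",
      "text": "Sales Order 4711",
      "weight": "Bolder",
      "size": "Medium"
    },
    {
      "type": "TextBlock",
      "text": "Material M-100 is out of stock.",
      "wrap": true
    }
  ],
  "actions": [
    {
      "type": "Action.Submit",
      "title": "Update sales order"
    }
  ]
}
```

Swapping in your own TextBlock content, image URLs, and Input.* elements is all it takes to brand the card and turn it into the form-like input used when modifying the sales order.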
Principal Propagation for Authorization:
Principal propagation ensures that a user’s identity is securely passed from one system to another, allowing for proper authorization and access control. It plays a crucial role in maintaining security and seamless user experiences across different systems.
In this case, it ensures that an M365 user has the right access to the SAP system to access information without them having to use their SAP credentials to login. Here is an outline of the steps to implement this:
Set Up Microsoft Entra ID: Entra ID serves as the central identity provider for your applications. It manages user identities, authentication, and access control. When a user logs in, AAD validates their credentials and issues tokens (such as JWT) that represent their identity.
Configure Azure API Management (APIM): APIM acts as a gateway for APIs, managing their exposure, security, and policies. It handles requests from clients and routes them to the appropriate backend services. In APIM, you configure AAD authentication for your APIs. When a client makes a request, APIM validates the token with AAD to ensure the user’s identity. Additionally, APIM forwards requests to the backend system (e.g., SAP) based on the API configuration, including features like caching and rate limiting.
Flow of Principal Propagation:
User requests an APIM endpoint.
APIM validates the user’s token with AAD.
If valid, APIM extracts the user’s identity.
APIM forwards the request to SAP with the user’s identity.
SAP uses this identity for authorization. SAP systems accept only tokens they issued themselves, which is why we need to interact with SAP’s identity provider.
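As a sketch of the token exchange in steps 2–4, APIM (or a backing service) can use the OAuth 2.0 on-behalf-of flow against Entra ID to trade the user’s incoming token for one usable downstream; for SAP, a further SAML bearer exchange against SAP’s identity provider is typically layered on top. The client ID, secret, and scope below are placeholder assumptions.

```python
# Toy sketch: building the OAuth 2.0 on-behalf-of (OBO) request body that
# is POSTed to the Entra ID token endpoint. All IDs, secrets, and scopes
# are placeholders, not values from the scenario.

def build_obo_request(client_id, client_secret, user_assertion, scope):
    """Return the form body for an Entra ID on-behalf-of token exchange."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "client_id": client_id,
        "client_secret": client_secret,
        "assertion": user_assertion,  # the token APIM received from the user
        "scope": scope,
        "requested_token_use": "on_behalf_of",
    }

body = build_obo_request(
    client_id="<app-registration-id>",
    client_secret="<secret>",
    user_assertion="<incoming-user-jwt>",
    scope="api://sap-gateway/.default",
)
print(body["requested_token_use"])  # on_behalf_of
```

The key point is that the user’s identity travels inside the `assertion` field, so the downstream token is issued for that specific user rather than for a shared service account, which is what makes SAP-side authorization checks meaningful.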
Integrating with Power Platform: You can create a custom connector in Power Platform that calls the APIM API. This allows you to seamlessly incorporate principal propagation into your Power Automate flow. You can read more about creating a custom connector here.
Here are resources that discuss Principal Propagation and the steps associated with its implementation in detail:
Configure SAP Principal Propagation with AAD and SAP OAuth server
To learn more about how to implement this new version of the scenario visit the GitHub repo that has detailed instructions as well as Power Automate flows you can readily import to your environment.
(Preview) How Office is improving the reliability of DLP policy tips on Windows
If you are an administrator who uses SharePoint and OneDrive DLP rules to protect your organization’s sensitive files, you may have noticed that the policy tips informing users about the policies applied to their files are not always reliable in Word, Excel, and PowerPoint on Windows.
We’ve heard your feedback and are happy to announce that we’re rolling out a new update to Office apps on Windows that will significantly improve the reliability of DLP policy tips for files stored in SharePoint and OneDrive for Business. This update will change how Office determines whether a policy tip should be shown for a file. Instead of trying to evaluate the policy rules on its own, Office will now show the same policy tip that is shown in SharePoint and OneDrive for Business. This will ensure that SharePoint or OneDrive for Business is always the source of truth, and Office will stay in sync on whether policy tips should be shown.
This update also eliminates the need for Office to download and cache policy XML files that contain potentially sensitive information.
What you need to know
Here are some important details about how the new update will work and what you can expect:
The update will apply to files that live in ODSP (either opened from the host or from a sync-backed folder), and that are opened with a work account (an AAD identity) that is properly licensed for ODSP DLP.
The update will only work online, with no caching. If the user is offline, no policy tip UX will be shown. If the user goes offline after they were online and a policy tip was shown, the policy tip will remain visible.
The update will not affect the existing options to override a policy tip, report a false positive, or turn off policy tip UX. These features will continue to work as before.
How to get this update
This updated feature is currently rolling out in Current Channel Preview. We hope that it enhances your experience with ODSP DLP policy tips by making them more consistent and accurate. We appreciate your feedback and look forward to delivering more improvements in the future!
WAC Certificate Issue – PKI Signed Cert cannot be used
WAC Certificate Issue
When I change the line in appsettings.json from
"Subject": "WindowsAdminCenterSelfSigned"
to
"Subject": "CN=admincenter.domain.int"
and change the netsh config:
netsh http delete sslcert ipport=0.0.0.0:443
netsh http add sslcert ipport=0.0.0.0:443 certhash=81893C1D789EA40EC8FC04FD08DB72DD44F2FBB1 appid="{afebb9ad-9b97-4a91-9ab5-daf4d59122f6}"
restart-service WindowsAdminCenter
WAC is not accessible, because the WAC service cannot be started!
Why did you not use the Thumbprint in the appsettings.json file? The same Subject, like servername.domain.int, can be used multiple times across certificates.
On the other hand, a Thumbprint has a fixed length, while a Subject can be very long, like:
"E=email address removed for privacy reasons, CN=admincenter.domain.internal, OU=Domain, O=company, L=munich, C=DE"
Outlook Web apps not showing up
I am on the Outlook website and the apps are not showing up when I go to the add apps page.
My add-ins, which I have been using for months, have also disappeared. I have tried reinstalling but nothing happens.
It seems to be on just this account.
Removing the MS Sign-On from Startup: A Step-by-Step Guide
I am currently the sole user of this computer, and I accidentally entered an email address which now prompts me for a password every time the system starts up. How can I disable this login requirement so that I only need to press Enter or the space bar to access my computer?
Thank you for your assistance in resolving this issue.
Followup sheet – Macro
Hello everyone,
After days of failing, I have come here to ask for your help gurus please!!
I am working on a VBA project in Excel that involves three main tabs:
Mailing: This tab contains a table with client names and emails. I have a button that allows sending emails to all clients listed in the table. This part is working perfectly!
Followup: In this tab, I want to retrieve the 100 most recent emails from my inbox and fill them into a table. The goal is to read these emails and compare them with the emails sent from the “Mailing” tab to check if I have received responses from my clients.
Inbox: This tab serves as a database where the 100 most recent emails are stored. This is working as well!!
What I need:
WORKING – Retrieve the 100 most recent emails: I need help writing VBA code that retrieves the 100 most recent emails from my Outlook inbox and fills the “Inbox Emails” tab.
NOT WORKING – Compare and update status: After filling the “Inbox Emails” tab, I need the VBA to read each of the 100 emails and compare them with the sent emails listed in the “Mailing” tab. If there is a response from a client, the status should be updated to “New Answer”. If there is no response, the status should be “Waiting”.
I appreciate any help in advance and am available to clarify any questions. Please reach out to me on any platform!
Adding a clarification question in Forms if a minimum score is returned
Hi, we are in the process of moving some of our documents into Forms. In one of them, users are asked to score their abilities for a task between 1 and 10.
I would like to be able to ask a clarification question if a responder answers 8, 9, or 10 to a question.
Is this possible? I cannot see in Branching how to determine what answer was given so that a follow-up question is triggered.
MDE API to trigger custom detection rule run
Hi All,
We are deploying MDE custom detections to a new site via pipeline and some scripts using the API.
But since we are deploying and enabling the rules in groups, their last/next run times are all the same within a group (especially for the ones with 12-hour/24-hour periods).
For now, the only way I could find to change the run start time is running the rule manually.
Is there a better way/API endpoint to run/change the periodic run time of the rules? If yes, with a script I can better disperse the rule periodic run times throughout the day.
Thanks in advance
Emin
Polls disappear when admitting external users to meeting
We are having trouble lately with polls within Teams webinars. As soon as the first external participant joins, the polls disappear. We put on many training webinars a month and rely on polling for interaction. Has anyone found a fix for this? Is it a known issue?
I found the below thread but it has more questions than answers:
https://answers.microsoft.com/en-us/msteams/forum/all/teams-poll-option-getting-disappeared-during-the/d642ce2f-f772-4f41-8d69-f0835d2dd0d5
https://answers.microsoft.com/en-us/msteams/forum/all/polls-disappearing-in-teams-live-webinar-as-soon/fea4f5c5-42ef-4fc2-9d96-0c3b1774b96a
Fixing a Windows Boot Problem on a PC
Recently, my Samsung Galaxy Pro 360 underwent a significant software update. Following this update, my PC experienced a problematic boot sequence. Upon restarting, Windows would only partially load, displaying a limited taskbar and failing to proceed further. Despite the mouse functionality, the system would ultimately freeze, requiring a forced shutdown. This frustrating cycle persisted, with intermittent instances of Windows failing to load entirely and displaying the dreaded blue screen error message.
Before this update, my PC operated without any complications. To address the boot problem, I ran troubleshooting commands such as DISM and SFC /scannow, but to no avail. Seeking assistance from Samsung Support, I inquired about re-flashing the BIOS, only to be informed that it was not possible. This contrasted with my past experiences with Asus products, where I had successfully performed BIOS flashings independently. The support team’s proposed solution was to either reinstall Windows or consider purchasing a new laptop at a discounted rate.
In response to this conundrum, I have opted to acquire a new drive and will attempt to resolve the boot issue by implementing this hardware change. The underlying cause of this perplexing boot problem remains elusive, prompting further investigation into potential factors contributing to this ordeal.
Troubleshooting: My Brand New Computer Won’t Start Up
I recently acquired a new computer without an operating system. Although I have two functional drives that successfully boot on my current Windows 10 and Windows 11 system, when I installed each drive separately in the new computer, they both failed to boot properly. The Windows loading screen appears, but then the process stalls.
I attempted creating a bootable USB using Windows, and also tried using an ISO file, but neither method worked. The BIOS recognizes all the drives and the USB drive.
Before attempting another boot, I also installed a graphics card. Would removing the graphics card make a difference in booting up successfully? Any assistance on this matter would be greatly appreciated. Thank you.
Video upload issue on teams chat
Teams has had a video upload bug in the last few releases; please fix it.
Unable to upload recording
Ready, Set, AI: What our People Science research tells us about AI Readiness
On July 18, the Viva People Science team held the fourth webinar in its AI Empowerment series. During this webinar, I was joined by Carolyn Kalafut (Principal People Scientist at Microsoft Viva), Megan Benzing (Viva People Science Researcher) and Craig Ramsay (Viva People Science Researcher) who have been leading a research study into what it means to be ready for AI transformation as an organization.
We talked about the key insights that emerged from the research including:
AI and the employee experience being complementary to one another, and how a positive employee experience can help to drive successful AI transformation
How leaders and individual contributors are experiencing change differently, and the blind spots that leaders need to be aware of when planning an AI rollout strategy
How High Performing Organizations are taking a much more people-centric approach to change compared to typical organizations
The need to balance the excitement and hopes that employees have about AI with their concerns around data security, over-reliance on AI and job loss
The presenters wrapped up the session with some key principles for AI transformation based on their learnings from the research study. These included recognizing the scope of AI transformation and its impact, taking an agile approach and encouraging experimentation in teams, and taking a human approach to the change by leading with empathy and addressing concerns.
We invite you to watch the recording from this session, and those from our previous events in this series below. Discover more, engage with the content, and let’s embark on this journey together.
AI Empowerment: Introducing our Viva People-Science series for HR
AI Empowerment: Preparing your organization for AI with learnings from Microsoft
Looking for a solution to automate inventory management of product accessories.
Hello Excel Community!
I’m looking for some sort of formula that would let me create an additional sheet within my current Excel workbook to track the inventory of the accessories we need to sell our items. This is the current setup of our document, with a master list that manages all of the Data Validation for columns C-H; a coworker has access to it, but I currently do not. Each drop-down has a variety of values we use regularly to manage our intake, and so we can find each item if/when our Access database crashes (which happens multiple times a day).
Effectively, I’m looking for some way to track the quantities of the bags in column D in a separate sheet, as well as the potential for 1-2 more columns for hoods and/or batteries. This will free us up a substantial amount of time instead of having to count bags each week.
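For anyone picturing it, something like this is the kind of thing I'm after on the new sheet. This is just a sketch with placeholder names: it assumes the intake data lives on a sheet called "Intake" with bag types in column D, and that the new tracking sheet lists each bag type in column A.

```excel
Tracking sheet, cell B2 (count of rows on Intake using the bag type in A2):
=COUNTIF(Intake!D:D, A2)

Equivalent array-style version, in case the range needs extra conditions later:
=SUMPRODUCT((Intake!D:D=A2)*1)
```

Filled down column B, that would give a live count per bag type without anyone counting bags by hand; the same pattern would extend to hoods or batteries if they get their own columns.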
I’ve tried a few other solutions I’ve found on here with minimal luck. If I have to rebuild the entire thing from scratch to make something work, I definitely am up to doing that if it’s not possible with the current setup we run.
Thank you in advance for any help!
New Outlook closes shortly after start
I’m trying to use the new Outlook, but as soon as I open the application it closes, and if I try to reopen it, it doesn’t work.
Search in Viva Connections rapidly slowed down
Hi,
First of all, I’m not sure if I’m in the right place, so I apologise if this doesn’t belong here.
Our company helped one of our customers deploy Viva Connections.
It mostly worked fine, but then one issue appeared. They have two cards there: Search for users and Search through intranet. Whenever anyone searches for anything, it now takes between 20 and 30 seconds to load; it is supposed to load, and previously did load, within 5 seconds or so.
I haven’t found what could be causing the issue, since this seems to be the only occurrence of a problem of this kind.
Has anyone seen anything like this?
Thank you