Month: September 2024
Microsoft names new Executive Vice President and Chief Operations Officer
Satya Nadella, Chairman and CEO, shared the below communication with Microsoft employees this morning.
As we mark our 50th year, I’ve been reflecting on how we have remained a consequential company decade after decade in an industry where there is no franchise value. And it is because — time and time again when tech paradigms have shifted — we have seized the opportunity to reinvent ourselves. And that’s what we are doing again today in this AI platform shift.
Carolina Dybeck Happe
To continue thriving as a company, we need to raise the bar on our operational excellence, continually improving security, quality, and delivery to our customers, as well as the rigor with which we operate the business. Building this capability is essential, and I want each of us to take as much pride in exceeding customer expectations in our fundamentals as we do in our product innovation. After all, both are mission critical to our customers and our future.
In this context, I’m thrilled to share that Carolina Dybeck Happe is joining Microsoft as EVP and Chief Operations Officer. In this newly created role, she will join the senior leadership team (SLT), reporting to me.
I’ve come to admire Carolina through her work as a global business leader, including most recently her role in leading GE’s historic turnaround. She is recognized for her ability to drive transformational change at scale while delivering improved customer experiences and faster time to value. Carolina will partner with the SLT to help us drive continuous business process improvement across all our organizations and accelerate our company-wide AI transformation, increasing value to customers and partners.
As part of this transition, the Commerce + Ecosystems organization in Cloud + AI, the Microsoft Digital organization in Experiences + Devices, and the Microsoft Business Operations organization in Finance will move to report to Carolina. These teams are doing mission-critical work for us with high ambition plans on how to empower our partners, customers, and employees with world class technology and experiences.
I look forward to seeing the progress we will achieve together as we embrace continuous improvement in all we do.
Please join me in welcoming Carolina to Microsoft.
Satya
Looking for help with labelling groups
Hey everyone,
so far I've read all the MATLAB entries on adding/creating categories and/or labels, but I haven't found anything that works for me. I have a data set in 178×13 double format, the variable names in a 1×13 cell, and the group classification in a 178×1 double, which is just 1, 2, and 3. In other projects I had a 178×1 cell containing the group names instead, and I would use that to generate a double using findgroups. Now I would like to do the reverse and generate a 178×1 cell with the names instead of the numbers. num2cell does not work here, of course, because I want to tell MATLAB to put "GroupA" for every 1 in the 178×1 group classification double, "GroupB" for every 2, and "GroupC" for every 3. I also know that the first 59 rows belong to GroupA etc., but I don't know how to implement that either. Maybe it's the language barrier, but I really don't even know what to search for anymore. Any help is much appreciated!
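A hedged sketch of one common approach (the variable name groupIdx is hypothetical, standing in for the 178×1 double): a cell array of names can be indexed directly with the group numbers.

names = {'GroupA','GroupB','GroupC'};   % one name per group number
groupIdx = [1; 1; 2; 3; 2];             % stand-in for the 178x1 double of 1/2/3
groupNames = names(groupIdx);           % looks up a name for every row
groupNames = groupNames(:);             % force a column so it matches groupIdx
% categorical does the same mapping in one step and may be easier downstream:
% c = categorical(groupIdx, 1:3, names); cellstr(c) recovers the cell array.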
MATLAB running on a server under a university–MathWorks license suddenly stopped working with a "license expired" message (License Manager Error -10)
MATLAB, which had been running under my account on the University of Tokyo–MathWorks license, suddenly stopped working with the following message:
(anaconda3-2022.05)sekigh@1de7b0766802:/external_disk_3/container_3/home/sekigh/Windowsfolder$ matlab
License checkout failed.
License Manager Error -10
Your license for MATLAB has expired.
If you are not using a trial license contact your License Administrator to obtain an updated license.
Otherwise, contact your Sales Representative for a trial extension.
Troubleshoot this issue by visiting:
https://www.mathworks.com/support/lme/10
Diagnostic Information:
Feature: MATLAB
License path: /home/sekigh/.matlab/R2023a_licenses/license_1de7b0766802_40790257_R2023a.lic:/usr/local/MATLAB/R2023a/licenses/license.dat:/usr/local/MATLAB/R2023a/licenses
Licensing error: -10,32.
(anaconda3-2022.05)sekigh@1de7b0766802:/external_disk_3/container_3/home/sekigh/Windowsfolder$
Besides this server, MATLAB is installed on two other machines under my account, and those are running normally. Please tell me how to activate the license.
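A hedged note, not part of the original question: License Manager Error -10 means the installed license file has passed its expiration date. For individually activated installs, re-running the activation client that ships with MATLAB (on a default Linux install, /usr/local/MATLAB/R2023a/bin/activate_matlab.sh) and signing in with the MathWorks account usually fetches a renewed license file; otherwise the university's license administrator needs to supply an updated license.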
Control quiver arrow head width vs height.
Hi, I have a quiver plot where the arrow heads are incredibly wide by default, so I used "MaxHeadSize" to shrink them to a reasonable size. Also, the scaling behavior is set to 0 because I want them to connect the way they do currently. The downside is that now, instead of looking like arrowheads, they almost look like horizontal bars. I'm wondering if there is a way to control the head length independently of the head width.
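A hedged note on this one: quiver only exposes MaxHeadSize, which scales the whole head, and to my knowledge there is no documented quiver property for head length independent of head width. One common workaround is to draw annotation arrows instead, since they expose HeadLength and HeadWidth separately (in points). A minimal sketch for a single arrow, assuming default normalized figure units:

% draw one arrow with an independently sized head (annotation works in
% normalized figure coordinates, so data coordinates must be converted)
figure
ax = axes;
axis(ax, [0 10 0 10])
x0 = 2; y0 = 3; u = 4; v = 2;                     % arrow base and components
pos = ax.Position;                                % [left bottom width height]
xn = @(x) pos(1) + pos(3)*(x - ax.XLim(1))/diff(ax.XLim);
yn = @(y) pos(2) + pos(4)*(y - ax.YLim(1))/diff(ax.YLim);
annotation('arrow', [xn(x0) xn(x0+u)], [yn(y0) yn(y0+v)], ...
    'HeadWidth', 6, 'HeadLength', 12)             % points; set independently

The conversion is only valid while the axes limits and position stay fixed, so this is best done after the axes layout is final.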
Folder sharing link – setting expiry date
Hi,
I have a SharePoint site with a number of folders that have view only access configured at folder level for a mix of internal and external users.
All of these links had expiry dates set on them and this has been working fine for the last couple of months.
Today, I created a new folder and the ability to set expiry dates for the link seems to have disappeared. (I’m using the “only specific users” option when creating the link).
Previously I would click share next to the folder name, then on the gear icon, and select a date. The option has gone.
Not only that, but on all the previous access links I’ve created on all sites I can no longer see that they have an expiry date.
There’s normally a small calendar icon next to the link and that has now disappeared for all folders across all sites.
At the moment I’m assuming this is following an update having been applied or a global setting having been changed somewhere.
I even tried creating a new, clean SharePoint site but the problem still occurs on the new site.
Can anyone advise on what may have caused the expiry date option to disappear when creating sharing links?
Teams Voice – business recovery plan
Hello,
I have a mandate to draw up a business recovery plan. We use Teams phones with domestic calling licenses and Microsoft is our service provider.
I can't find any information about vendor/customer responsibility for the Voice portion of Teams. I assume that as a SaaS offering, Microsoft is responsible for the availability of the service itself, but what about configurations such as phone numbers, auto attendants, call queues, holidays, and resource accounts?
Regards
Eric
WebDav installation issue with IIS
Hi! Installing the IIS Web Server role failed on a Windows Server 2016 virtual machine. The machine connects to the network through a proxy, and I connect to it via xfreerdp.
What could the cause be?
Received email: Immediate Action Required: Update your sign-in technology before September 16th, 2024
I’m not sure of:
1. Is this a legit email?
2. I’m not a tech. Pls use plain English.
3. How do I know I’m using 3rd party stuff. What’s a third party? I tap on my email icon on iPhone and get my Hotmail emails. I also use a pc and get my hotmail email through that sometimes (link says Outlook.live.com, but my address is hotmail.com). Is it outlook email or hotmail or what?
4. Which one is going to be affected and how do I make it work? In plain English, not in tech terms please! 🙏🏼I’m extremely lost because all search I’ve made so far comes up with tech language, acronyms, download this, and that, but all in terms assuming a person has extended tech knowledge.
Please help.
Chief Noel…
Month Year as an ID
Hi Experts,
I have been fiddling around with this query for quite some time. I am getting nowhere with it. What I have resorted to is trying to use MonthYearSort in a Where clause in the following OpeningBalance calculation (I typically would use an ID but I don't have one. I don't know if this is a good idea…probably not, however it is unique, just not a typical ID since it's based on a date). I'm not sure if some special kind of formatting would be needed since it is based on a date. I am grabbing a sum from another query and summing by month based on the MonthYearSort.
OpeningBal: format(Nz(DSum("SumOfAmount","qryDrawsDecliningCumDrawn","Type=" & [tblDraws].[Type] & " And [MonthYearSort] < " & Nz([qryDrawsDecliningCumDrawn].[MonthYearSort],0)),0),"Currency")
MonthYrSort: Year([FundingDate])*12+DatePart('m',[FundingDate])-1
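For reference, that expression simply maps each month to a sequential integer, so consecutive months always differ by exactly 1: a FundingDate in March 2024 gives 2024*12 + 3 - 1 = 24290, and April 2024 gives 24291. That is why the value can safely be compared with < in the Where clause even though it is derived from a date.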
thank you.
Analysis Services Community
Hi everyone,
I’m not sure if this is the right place for this question, but I can’t find a specific community for SSAS projects.
Anyway, I hope you can help me with this doubt:
I have a Tabular project using Visual Studio and SQL Server Analysis Services (SSAS), with the aim of viewing the data in a Power BI report in Direct Query mode. My question is, why do I have to process the table in Visual Studio, deploy the solution, and then process it again in SQL to see the data refreshed in Power BI when I make changes to the project?
Is there a more efficient way to handle this? Why do I need to process the table twice—once in Visual Studio and once again in SQL after deploying?
For example: I changed the data type of a column in my table, and I don’t see the change until I go through the previous steps I detailed.
Forms Attachment to SharePoint with a twist
Hello, I have a form that I use to collect responses in relation to accident reports. I have a flow set up to add them to a SharePoint list. I used to have the attachment handling working when I was the owner of the form, because attachments would save to my OneDrive folder. These pictures are usually of injuries, so I don't want them on my drive. I have transferred ownership of the form to the actual SharePoint site where the list is held. The attachments now go into a folder on SharePoint under Documents > Apps > etc. instead of my OneDrive > Apps > etc. I tried the SharePoint Get File Content action, but I cannot get it to find the file I need.
Elastic pools for Azure SQL Database Hyperscale now Generally Available!
We are very pleased to announce General Availability (GA) for Azure SQL Database Hyperscale elastic pools (“Hyperscale elastic pools”).
Why use Hyperscale elastic pools?
Azure SQL Database is the preferred database technology for hundreds of thousands of customers. Built on top of the rock-solid SQL Server engine and leveraging leading cloud-native architecture and technologies, Azure SQL Database Hyperscale offers leading performance, scalability and elasticity with one of the lowest TCOs in the industry.
While you may start with a standalone Hyperscale database, chances are that as your fleet of databases grows, you want to optimize price and performance across a set of Hyperscale databases. Elastic pools offer the convenience of pooling resources like CPU, memory, IO, while ensuring strong security isolation between those databases.
Here’s an example showing 8 standalone databases, each with an individually variable workload. Each database tends to spike in resource consumption at different points in time. Each database must therefore be allocated adequate resources (CPU, data IO, log IO, etc.) to accommodate the individual peak resource requirement. Accordingly, the total cost of these databases is directly proportional to the number of databases, while the average utilization of each database is low.
vCore configuration shown is for demonstration purposes. Prices as of Sep 12, 2024, and only represent compute costs for Azure East US. Storage costs are extra and are billed per database. Actual configuration depends on workload profiles and performance requirements.
With the shared resource model offered by Hyperscale elastic pools, the aggregate performance requirements of the workloads become much “smoother”, as seen in the white line chart. Correspondingly, the elastic pool only needs to be provisioned for the maximum combined resource requirement. This way, overall cost is lower, and average resource utilization is much higher than the standalone database scenario.
vCore configuration shown is for demonstration purposes. Prices as of Sep 12, 2024, and only represent compute costs for Azure East US. Savings are indicative. Storage costs are extra and billed per database. Actual configuration depends on workload profiles and performance requirements.
Customer feedback
For many customers, elastic pools are an essential tool to stay competitive from price and performance perspectives. Adam Wiedenhaefer, Principal Data Architect, Covetrus Global Technology Solutions, says:
“Elastic pools on Azure SQL Database Hyperscale has provided us a solid blend between performance, storage, and overall flexibility. This allows us to scale our systems in ways we couldn’t before at a price point that reduces overall costs, savings we can pass on to Covetrus customers.“
The cloud-native architecture for Hyperscale elastic pools enables independent scaling of compute and storage in a fast and predictable manner. This allows customers to perfectly optimize their compute resources while relying on auto-scaling storage, which provides hands-off scalability and great performance as their databases grow. Nick Olsen, CTO, ResMan says:
“We have been users of Azure SQL Database elastic pools for over a decade now and have loved the ability to share resources amongst many databases. Our applications are such that only a few databases reach peak utilization simultaneously, but we need to allow any given database to consume quite a bit of resources when bursts occur. As our requirements evolved, we found that we needed to go beyond the resource limits of our existing pools while controlling for the amount of time it would take to scale very large pools. The introduction of elastic pools in Azure SQL Database Hyperscale introduced much higher limits on pool size and the ability to scale in constant time, regardless of the size of the workload. We are now able to meet the evolving needs of our business while allowing us to achieve greater cost savings than we have had in the past.”
Throughout public preview, we have received overwhelmingly positive feedback from several customers about the superb reliability, great performance and scalability, and the value for money that Hyperscale elastic pools have provided. Many customers are already running production systems on Hyperscale elastic pools since public preview.
Availability
During a very successful public preview, we have seen tremendous adoption from many customers and have addressed top customer requests and improvements including:
Zone redundancy for Hyperscale elastic pools.
Premium-series (PRMS / MOPRMS) hardware for Hyperscale elastic pools.
Reserved capacity for these premium-series hardware options.
Maintenance window support for Hyperscale elastic pools.
All these capabilities for Hyperscale elastic pools are also Generally Available (GA) starting 12 September 2024. Hyperscale elastic pools are available in all supported Azure regions including US Government regions and Azure China.
Pricing
With GA we are also adjusting the pricing for Hyperscale elastic pools. Starting 12 September 2024, we will begin charging an additional $0.05 / vCore / hour for each Hyperscale elastic pool, compared to the preview price. The additional charge will not be eligible for reserved capacity discounts and will apply to the primary pool replica and any secondary pool replicas configured. The final pricing is visible in the Azure portal, Azure Pricing calculator and on the Azure SQL Database pricing page.
Resources
Learn more about the architecture of Hyperscale elastic pools.
[Docs] View examples of PowerShell and Azure CLI commands for managing Hyperscale elastic pools (a brief CLI sketch follows after this list).
[Docs] Review supported capabilities and resource limits for Hyperscale elastic pools.
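To make the linked examples concrete, here is a hedged sketch of creating a Hyperscale elastic pool with the Azure CLI; the resource group, server, and pool names are placeholders, and the parameters should be verified against the docs linked above:

az sql elastic-pool create --resource-group myResourceGroup --server myserver --name mypool --edition Hyperscale --family Gen5 --capacity 4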
Our team is here to assist with any questions you may have. Please leave a comment on this blog and we’ll be happy to get back to you. Alternatively, you can also email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!
From Feedback to Action: How Viva Pulse Empowers Managers at Microsoft to Drive Employee Engagement
In fall 2023, the Viva Pulse product team collaborated with Microsoft’s HRBI (HR Business Insights) team to develop a strategy for continuous listening using Viva Pulse. We designed a comprehensive employee data collection approach that includes the existing semi-annual Viva Glint engagement survey, supplemented by new, manager-driven Viva Pulse surveys in the interim.
“We leverage Viva Pulse as a strategic tool within our broader Employee Listening System. It provides special value between large survey cycles to track progress and drive action taking, and the opportunities are endless as this tool flexibly enables our managers/project leads democratized access to robust surveying functionality.” – Dante Myers, Director of Employee Listening Systems
The HRBI team created an organization-specific pulse survey template tailored for managers at Microsoft. This custom template was designed to help managers regularly check in on their team’s progress on the focus areas identified from the previous Viva Glint engagement survey cycle and to make any necessary adjustments to their action plans. Six weeks after the release of the Viva Glint engagement survey results, in early 2024, the HRBI team sent an email to all managers, encouraging them to use Viva Pulse to act on their Viva Glint engagement survey feedback.
Following the communication, Microsoft conducted a research study with managers who used Viva Pulse to learn more about the effectiveness of the tool to support actioning. From the study, the manager participants highlighted the importance of acting on survey feedback to build trust and demonstrate that leadership listens. They emphasized the value of regular surveys to provide leaders with ongoing insights and maintain connection with their teams in between survey cycles. Additionally, they see surveys as a tool to reinforce organizational culture, unify the team, and communicate that employee opinions are valued and impactful.
When the next Viva Glint engagement survey was released, the Research team reconnected with some of the manager participants to follow up on their Viva Pulse experience and any impact it may have had on their Viva Glint engagement survey experience. The managers noted a significant increase in response rates in their interim Pulses and the Glint engagement survey.
One manager participant was unable to review his direct team’s fall 2023 Viva Glint engagement survey scores as responses were too low. This round – 88% responded. He noted it was because of the continuous listening and action taking. He said, “I’m just taking the time from those Pulses and bringing them up on team calls and immediately sharing the results. The higher result happened; I believe [it is] from people being more familiar & comfortable, developing that muscle with the process.”
The manager participants we spoke to attribute these response improvements to Pulse building trust within respondents, further emphasizing that their feedback is being heard, and action is being taken. Additionally, these manager participants took the time to process the Pulse data after each survey, share it with their team, and create an environment for discussion, which we believe is crucial to closing the feedback loop.
Another manager participant noted, “Pulse is a great way for people to know that leadership is doing [things] or trying to do [things]….It signals that we’re not just doing this because [our CEO] wants us to do all these surveys and check sentiment…No, we’re doing this independently.” Managers appreciated the opportunity to seek feedback directly, instead of waiting for the next engagement survey to receive sentiment from their teams. They found Viva Pulse’s stock templates and question library particularly useful as a starting point, and by customizing the survey to match their team’s needs and focus areas, the managers were able to gather insights during critical moments.
As this initial experiment concludes, the Viva Pulse team will continue to invest in Viva Pulse as a follow up to engagement surveys. With deeper integrations planned between Viva Pulse and Viva Glint, we aim to support manager enablement in driving employee engagement, culture, and inclusion. Our goal is to develop Viva Pulse in a way that empowers managers to take ownership of driving employee engagement.
Thank you for reading! If you have any questions or feedback, please don’t hesitate to comment. Furthermore, if you would like to get involved with Viva Pulse, please comment below to learn about our customer engagement opportunities.
Until Next Time,
Viva Pulse Team
Think like a People Scientist: How Microsoft used Viva Insights to understand organizational change
On September 11 we took a deep dive into the transformative power of Viva Glint, People Science and Viva Insights in understanding and driving organizational change. I was joined by our Viva People Scientists Keith Mcgrane, Beth Demko and Jennifer Stoll, and Todd Crutchfield (Principal Data Science Manager at Microsoft). The session unpacked the nuances of employee sentiment, organizational data, and the innovative use of Organizational Network Analysis (ONA) within Viva Insights.
Key points discussed during this webinar:
People Science and Change: Jennifer and Beth kicked off by exploring the People Science perspective behind organizational change, focusing on employee sentiment and the theories that guide our understanding during organizational transitions.
Introducing Viva Insights and Organizational Network Analysis: Keith introduced the concept of adding organizational data into your employee listening strategy. He spoke about how Viva Insights leverages organizational data to complement employee sentiment, with a special focus on ONA’s role in understanding and facilitating change.
Microsoft’s Use Case: Todd added a practical example from Microsoft, demonstrating how ONA was used internally to assess the impact of organizational restructuring on employee collaboration.
This session was a reminder of the power of data in understanding and navigating the human aspects of organizational change. It also highlighted the combined value of employee sentiment and organizational data to leaders during times of change.
We invite you to watch the recording and access the slides and other useful resources from this event below.
Read more about the Microsoft ONA use case HERE.
Learn more about our perspective on Holistic Listening HERE.
High CPU Consumption in IIS Worker Processes
Context:
High CPU consumption in IIS worker processes (w3wp.exe) can significantly impact the performance of your web applications. Here we will discuss identifying symptoms, initial troubleshooting steps, and data collection methods, along with the available tools.
Symptoms:
When IIS worker processes consume high CPU, you may notice:
Slow response times or timeouts in web applications.
High memory usage accompanying the high CPU.
Overall server performance degradation.
Initial isolation:
Troubleshooting high CPU involves two main tasks: issue identification, and log collection and analysis. Here are some isolation questions to help you get started with the IIS worker process:
Is high CPU accompanied by high memory usage or slowness? Observe memory consumption along with CPU consumption.
What steps lead up to the high CPU usage? Does the CPU consumption increase with load?
What is the CPU consumption of w3wp.exe alone, and what is the total CPU consumption on the server during the issue? If overall CPU is high, there may be a broader server performance problem.
How does CPU consumption change over time? Brief spikes up and down usually just reflect normal code execution. Monitor the process; if CPU stays high for 5-10 seconds or more, there is a scenario worth troubleshooting further.
How often does the problem occur, and how high does the CPU get? Look for a pattern or a specific time of day; perhaps a scheduled task is running or load is increasing.
Data Collection
For .NET Core applications hosted on IIS, the data collection method depends on whether the app is in-process or out-of-process. In-process mode is recommended over out-of-proc.
Caution 1: For CPU analysis, either ETW trace or memory dumps can be useful. Avoid collecting both simultaneously to prevent additional CPU load or slowness.
ETW Trace vs. Memory Dumps:
ETW Trace: Useful in production scenarios with minimal performance impact. Captures specific events and performance counters.
Memory Dumps: Provides an in-depth view of objects, their values, and roots. Useful for detailed analysis.
Caution 2: Avoid collecting logs at very high CPU values (>=95%) as the server might hang or become unresponsive. Collect logs before CPU reaches that level.
The initial goal is to get logs that enable us to watch the operations on the same non-waiting thread(s) over a portion of the problematic time period where w3wp.exe CPU is highest. Therefore:
Multiple dumps are needed (3 is usually a good number).
Dumps should be taken of the same process ID.
Dumps should be close enough in time (usually 10 seconds apart).
Collect dumps within a CPU usage that is considered high and abnormal.
ETW Steps – Perfview
We will use the Perfview tool to collect traces.
Download Perfview from here.
Run PerfView.exe as admin.
During the occurrence of the issue:
Click the Collect Menu and select Collect option.
Check Zip, Merge, and Thread Time checkboxes.
Expand the Advanced Options tab and select the IIS checkbox.
Start collection and stop once the issue is produced.
Note: Do not capture this trace for too long as data gets overwritten once the circular MB limit is reached.
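If you need to script the capture instead of using the GUI, PerfView also supports command-line collection. A hedged sketch using its documented switches (the GUI's IIS checkbox has no single switch equivalent here, so this covers only the Zip, Merge, and Thread Time options, with a time limit so the trace cannot run too long):

PerfView.exe collect /AcceptEULA /NoGui /ThreadTime /Zip /Merge /MaxCollectSec:60 HighCpu.etl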
Memory Dumps Steps
We will use either Procdump.exe or the DebugDiag Collection tool. Procdump is preferred when installing applications is not possible.
Caution 3: Do not use Task Manager to collect dumps. Specialized tools like DebugDiag and Procdump provide more detailed information and handle process bitness correctly.
Option 1: Procdump
Download Procdump.exe from here.
In an elevated command prompt, go to the directory where you have extracted the downloaded procdump.zip and run the command:
procdump.exe -c w3wpCPUConsumptionTrigger -s NumOfSeconds -ma -n NumberOfDumpsToCollect PID
-c: The w3wp.exe CPU consumption threshold. Replace w3wpCPUConsumptionTrigger with a threshold, say 80 (percent).
-s: The number of consecutive seconds the CPU consumption must stay above the threshold. Replace NumOfSeconds with, say, 10.
-ma: Write a full memory dump.
-n: The number of dumps to collect. Replace NumberOfDumpsToCollect with the desired count; 3 is usually enough for conclusive analysis.
PID is the process ID; you can find it in Task Manager or the IIS Worker Processes view in IIS Manager (see the tip after the example command below).
Command example : procdump.exe -c 80 -s 10 -ma -n 3 1234
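If you are not sure which PID to pass, a hedged tip: IIS ships the appcmd utility, which lists running worker processes together with the application pool each one serves:

%windir%\system32\inetsrv\appcmd list wp

Each line of output shows a worker process PID and its application pool name, so you can pick the w3wp.exe instance for the affected site.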
Option 2: DebugDiag
Install DebugDiag from here.
Open DebugDiag Collection from the start menu.
Change the path for dumps if needed via Tools > Options And Settings > Manual Userdump Save Folder.
Click the “Processes” tab.
Locate the w3wp process by its Process ID.
Right-click the process and select “Create Userdump Series”, then set the options.
Click “Save and close” to start generating dumps.
Finally: by following these steps, you can effectively isolate the issue and collect the data needed to diagnose high CPU consumption in IIS worker processes. If you are comfortable analyzing this data, you should be able to draw insights from the PerfView traces and the dumps. If you would like us to do that, please open a case with Microsoft IIS Support and we will do it for you.
Quick Items to Review for a Smooth Microsoft 365 Copilot Deployment
As Microsoft 365 Copilot continues to transform the way people work, it is more important than ever to have a governance structure in place for your data. This guide will help you ensure a smooth deployment and maximize the benefits of Microsoft 365 Copilot as quickly as possible.
Archive or Purge Stale Data and Sites
Purging or archiving data from SharePoint that is no longer being used is the cornerstone of any data governance strategy. The links below highlight some of the native tools in Microsoft 365 to help manage stale content.
Use Expiration policies for Microsoft 365 Groups so users can easily participate in the lifecycle of their data.
Set expiration for Microsoft 365 groups – Microsoft Entra ID | Microsoft Learn
Create Lifecycle policies to identify inactive sites (SharePoint Premium required).
Manage site lifecycle policies – SharePoint in Microsoft 365 | Microsoft Learn
Configure Retention Policies to automatically expire and purge old content
Configure Microsoft 365 retention settings to automatically retain or delete content | Microsoft Learn
Configure Microsoft 365 Archive for sites that are no longer being used, but need to be preserved for compliance or other reasons.
Manage Microsoft 365 Archive – Microsoft 365 Archive | Microsoft Learn
Review Sharing Settings
Below are settings a SharePoint administrator can configure to manage how users can share content with others.
If enabled, configure expiration for Anyone Links to ensure they are removed automatically.
Review Default Link Type to help guide users to the most appropriate experience for your organization (a PowerShell sketch for these sharing settings follows after this list).
Use sensitivity labels and configure a “default” in SharePoint to automate protection settings.
Configure a default sensitivity label for a SharePoint document library | Microsoft Learn
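As an illustration, the first two settings above can also be scripted with the SharePoint Online Management Shell. A hedged sketch; the admin URL and the 30-day value are placeholders to adapt to your tenant:

Connect-SPOService -Url https://contoso-admin.sharepoint.com
Set-SPOTenant -RequireAnonymousLinksExpireInDays 30   # Anyone links expire after 30 days
Set-SPOTenant -DefaultSharingLinkType Direct          # default new links to specific people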
Find Overshared Data
Configure Purview policies to look for and restrict access to potentially overshared files. Typical policies for Microsoft 365 Copilot will look for:
Passwords
API keys
Personal information
What DLP policy templates include
Data loss prevention policy tip reference for SharePoint
Review reports for content shared with “Anyone” or “People in the organization” (SharePoint Premium required)
Data access governance reports for SharePoint sites
Manage SharePoint Search
Consider Restricted SharePoint Search as a temporary solution while you finish auditing and cleaning up permissions. Restricted Search prevents users from discovering content that they have not interacted with; see the link below for details.
Restricted SharePoint Search
If applicable, remove sites from the SharePoint search index.
Semantic Index for Copilot
By following the steps outlined in this guide, you’ll be well-prepared to leverage the full potential of Microsoft 365 Copilot. These essential reviews will help you streamline your data governance processes and ensure a smooth deployment, without impacting collaboration.
How to search array in array same values?
Hello,
I create an array as below:
bigArray = rand(1,150);
bigArray(1,15:19) = [1 1 1 1 1]';
bigArray(1,25:29) = [1 1 1 1 1]';
bigArray(1,75:79) = [1 1 1 1 1]';
bigArray(1,105:109) = [1 1 1 1 1]';
bigArray(1,65) = 1;
bigArray(1,5:6) = [1 1]';
I want to find the indexes of the [1 1 1 1 1]' pattern, so I run this code:
idx = find(ismember(bigArray,[1 1 1 1 1]'))
The output I want to see is: [15 16 17 18 19 25 26 27 28 29 75 76 77 78 79 105 106 107 108 109]
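A hedged sketch of one way to get that output. ismember tests each element on its own, so it also flags the isolated 1s at indexes 5, 6 and 65; matching only runs of five consecutive 1s can be done with strfind, which has long accepted numeric vectors even though its documentation focuses on text:

pattern = [1 1 1 1 1];
starts = strfind(double(bigArray == 1), pattern);  % start index of each run of five 1s
idx = unique(starts(:) + (0:4));                   % expand every start into its 5 indexes
idx = idx(:).'                                     % row vector: [15 16 ... 108 109]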
Create a generic x for (x,y) plot
I’m not well versed in matlab at all so hopefully this post makes some sense.
I have tons of data in .txt files that I want to plot. However, the only "x" data set that I have is the real-life time, which I can't use in my plot(x,y) command since the values look like 10:23:55, for example. Since the x-axis is just time, I want to make a generic x data set with a command so that x gets the same length as the y data sets. Hopefully someone can help me with this.
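A hedged sketch of the usual approach (the variable name y is hypothetical, standing in for one imported data column): build an index vector with exactly the same length as y.

y = randn(1, 100);        % stand-in for a column read from one of the .txt files
x = 1:numel(y);           % generic x: 1, 2, 3, ... with the same length as y
plot(x, y)
xlabel('Sample number')
% if the original time strings (e.g. '10:23:55') are available, datetime also works:
% t = datetime(timeStrings, 'InputFormat', 'HH:mm:ss');
% plot(t, y)              % gives a real time axis instead of sample numbers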
Auto response message
Hello,
We are currently migrating to Exchange 365 as our new mail filter. Our current mail filter has the option to send an auto reply (for each incoming message) for chosen mailboxes.
As far as I know, Exchange 365 only supports the out-of-office feature for sending automated messages to external persons.
Is there an option to set up automated messages as in our current mail filter, either in Exchange or maybe in third-party tooling?
Kind regards,
Arjan
Using Copilot for Excel to create a chart
Hi everyone, over the last few weeks we have had a series of posts to show you some of the things that are possible to do with Copilot in Excel. I have a table that has life expectancy figures by year for both men and women.
[Image: Life expectancy table with columns for Year, Sex and (Years)]
With so much data it is hard to visualize it so I ask Copilot:
Create a line chart of average life expectancy by year, with one line for men and another line for women
[Image: Copilot in Excel pane with the above prompt and the returned chart showing average life expectancy by year, with one line for men and another line for women.]
I click on the add to a new sheet button in the Copilot pane and this chart gets inserted into my workbook:
[Image: Chart showing average life expectancy by year, with one line for men and another line for women.]
Over the coming weeks I will continue to share more examples of what you can do with Copilot in Excel.
Thanks for reading,
Microsoft Excel Team
*Disclaimer: If you try these types of prompts and they do not work as expected, it is most likely due to our gradual feature rollout process. Please try again in a few weeks.