Category Archives: Microsoft
Windows resiliency: Best practices and the path forward
The broad, open nature and scale of the Windows computing ecosystem is part of what makes it a powerful and unmatched choice across the globe. The recent CrowdStrike incident underscores the need for mission-critical resiliency within every organization, as well as our unique ability to support the change required.
When a major incident arises, we focus on remediation, learning, and change, all while communicating transparently to our ecosystem. On Saturday, David Weston described our “first responder” approach. Since the incident began, we have engaged over 5,000 support engineers working 24×7 to help bring critical services back online. We are providing ongoing updates via the Windows release health dashboard, where we detail remediation steps, including a signed Microsoft Recovery Tool.
Our goal is to be your trusted partner as you leverage technology and the end-to-end Microsoft stack to deliver amazing value for your workforce, your customers, and your partners. That means, when an issue arises, we immediately engage with partners and customers to dig into the details, help, learn, and evolve.
This incident shows clearly that Windows must prioritize change and innovation in the area of end-to-end resilience. These improvements must go hand in hand with ongoing improvements in security and be in close cooperation with our many partners, who also care deeply about the security of the Windows ecosystem.
Examples of innovation include the recently announced VBS enclaves, which provide an isolated compute environment that does not require kernel mode drivers to be tamper resistant, and the Microsoft Azure Attestation service, which can help determine boot path security posture. These examples use modern Zero Trust approaches and show what can be done to encourage development practices that do not rely on kernel access. We will continue to develop these capabilities, harden our platform, and do even more to improve the resiliency of the Windows ecosystem, working openly and collaboratively with the broad security community.
There is always the chance that an outage will impact an organization. Over the last few days, we’ve been on thousands of calls with organizations around the world. We’ve observed that those who were able to remediate and recover the most quickly followed a similar set of practices. We want to share those best practices with you.
Best practices to support resiliency in your organization
Have business continuity planning (BCP) and a major incident response plan (MIRP) in place. Include response and recovery best practices that outline the steps needed to get your environment back up and operating, including who to call and how to get support.
Back up data securely and often. We recommend your organization utilize cloud storage and backup solutions, as these are great options for securely accessing, sharing, and collaborating on files from anywhere. Organizations utilizing cloud storage solutions have had better experiences getting back online, as this removed barriers to simply resetting the device.
Ensure that you can restore your Windows devices quickly. A key component of resiliency in the event of an issue is to regularly create system restore points and use Windows built-in recovery options to restore devices. If you use Azure virtual machines, you can take a snapshot of your VMs (a minimal sketch follows these best practices). Organizations with recent restore points were able to recover more quickly from the recent CrowdStrike issue, and we observed that virtualized/cloud environments were among the quickest to recover.
Utilize deployment rings. Extend safe deployment practices into your environment by creating deployment rings to manage the rollout of updates and new features. Utilize your existing device management tools to manage deployment risk using the same approach Microsoft does. Alternatively, take advantage of automated deployment with Windows Autopatch. If you are using non-Microsoft products in your environment, including antivirus solutions, ensure that they offer ring-based deployment so you can control the pace and scale for your environment. As an example, Microsoft Defender allows for custom configuration of both engine and intelligence update staging.
Use the latest Windows security defaults and enable Windows security baselines. Enable the security features that are available in Windows by default. Take advantage of Windows security baselines, which provide Microsoft-recommended, well-tested configurations based on feedback from Microsoft security engineering teams, product groups, partners, and organizations. Windows offers several built-in security features to leverage, from firewalls to encryption to biometrics, and more at the enterprise level with endpoint detection and response (EDR), data protection, vulnerability management, compliance monitoring and more.
Adopting a cloud-native approach to managing Windows devices can make it easier to deploy updates and support recovery efforts in outage scenarios. Look at ways to move away from on-premises solutions to cloud management solutions, cloud identity solutions, and ring-based deployment and update management solutions like Windows Autopatch.
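To illustrate the VM snapshot guidance above, here is a minimal sketch using the azure-mgmt-compute Python SDK to snapshot a VM's OS disk. The subscription, resource group, and VM names are hypothetical placeholders; this is an illustrative example, not Microsoft's prescribed tooling.

# Minimal sketch: snapshot an Azure VM's OS disk (names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-example"
vm_name = "vm-example"

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Look up the VM's OS disk so it can be copied into a snapshot.
vm = compute.virtual_machines.get(resource_group, vm_name)
os_disk_id = vm.storage_profile.os_disk.managed_disk.id

poller = compute.snapshots.begin_create_or_update(
    resource_group,
    f"{vm_name}-osdisk-snapshot",
    {
        "location": vm.location,
        "creation_data": {"create_option": "Copy", "source_resource_id": os_disk_id},
    },
)
print("Snapshot created:", poller.result().name)

Running a script like this (or, more simply, an Azure Backup policy) before rolling out updates gives you a recent restore point to fall back on.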
Our commitment to transparency
Our focus continues to be on helping our customers recover from this incident. We will practice transparency in sharing learnings, best practices, and, eventually, more detailed discussions that include changes designed to strengthen the broader ecosystem moving forward.
Ricoh finisher options are not working
Hi,
I have Ricoh IM C4510 deployed in UP. I’m using the V4 Universal Driver. When I try to print something, I can select the options for hole punching and stapling, but the print job does not go through the finishing unit to perform any of those actions. Is there a way to fix this?
Thanks
Source data for Document Library Details Panel
I had an end user accidentally delete some documents, which I restored via the Document Library Recycle Bin. I am being told that some of the documents have not returned. I am able to see some activity in the Library Details Panel [see screenshot], which I got to by clicking the “i” icon. Does anyone know where I can find this source data and download it so I can see what happened on Monday, July 22nd?
Microsoft EXCEL file PASSWORD RECOVERY [for Microsoft 365 MSO (Version 2407) 64-bit]
My name is Johnson M, and I’m from India.
I use Microsoft 365 on my home laptop and have stored all my details in an Excel file that I’ve been updating for a long time. In mid-May, I changed the password, thinking I’d remember it, but now I can’t recall it despite trying all possible methods.
I’ve spent over a month trying to recover the password, using various online tips, YouTube videos, and even professional data recovery services, but nothing has worked.
I regret not reaching out to the forum sooner. Can you please help me?
Thank you so much!
Outlook Started Crashing – even in SAFE mode today.
This just started happening – I have not added any new applications or changed any settings. Everything has been working for WEEKS, Months!
Anyone else experiencing this on Windows 11?
Faulting application name: OUTLOOK.EXE, version: 16.0.17726.20160, time stamp: 0x668c527b
Faulting module name: ucrtbase.dll, version: 10.0.22621.3593, time stamp: 0x10c46e71
Exception code: 0xc0000005
Fault offset: 0x000000000005137c
Faulting process id: 0x2AD8
Faulting application start time: 0x1DADEBB287F3844
Faulting application path: C:\Program Files\Microsoft Office\root\Office16\OUTLOOK.EXE
Faulting module path: C:\Windows\System32\ucrtbase.dll
Report Id: dde66e9a-ce52-4e4b-81d0-4f88bb746329
Faulting package full name:
Faulting package-relative application ID:
Health state monitoring for a service hosted on Azure Virtual Machine
I have a virtual machine hosted on Azure, and I wanted to add some monitoring of the health state of a service deployed on the VM. I have already enabled the health monitoring extension to keep pinging the service’s health URL, which returns 200 when the service is healthy and an error otherwise. The VM’s overview page shows a green “Healthy” or yellow “Unhealthy” state, which is great.
I was expecting to get some data in the HealthStateChangeEvent Insights log table when the service is down, but the table is always empty. If anyone has worked with this table or can offer any support, I would appreciate it.
https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/HealthStateChangeEvent
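One possibility worth checking: HealthStateChangeEvent is populated by the VM insights guest health feature (a preview), not by the application health extension, which may explain why the table stays empty. To confirm whether any rows exist at all, here is a minimal sketch using the azure-monitor-query Python SDK; the workspace ID is a placeholder, and the projected column names follow the table’s documented schema, so verify them against your workspace.

# Minimal sketch: look for recent rows in HealthStateChangeEvent.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder
client = LogsQueryClient(DefaultAzureCredential())

query = """
HealthStateChangeEvent
| where TimeGenerated > ago(7d)
| project TimeGenerated, MonitorName, PreviousMonitorState, CurrentMonitorState
| order by TimeGenerated desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(row)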
Phi-3 fine-tuning and new generative AI models are available for customizing and scaling AI apps
Developing and deploying AI applications at scale requires a robust and flexible platform that can handle the complex and diverse needs of modern enterprises. This is where Azure AI services come into play, offering developers the tools they need to create customized AI solutions grounded in their organizational data.
One of the most exciting updates in Azure AI is the recent introduction of serverless fine-tuning for Phi-3-mini and Phi-3-medium models. This feature enables developers to quickly and easily customize models for both cloud and edge scenarios without the need for extensive compute resources. Additionally, updates to Phi-3-mini have brought significant improvements in core quality, instruction-following, and structured output, allowing developers to build more performant models without additional costs.
Azure AI continues to expand its model offerings, with the latest additions including OpenAI’s GPT-4o mini, Meta’s Llama 3.1 405B, and Mistral’s Large 2. These models provide customers with greater choice and flexibility, enabling them to leverage the best tools for their specific needs. The introduction of Cohere Rerank further enhances Azure AI’s capabilities, offering enterprise-ready language models that deliver superior search results in production environments.
The Phi-3 family of small language models (SLMs) developed by Microsoft has been a game-changer in the AI landscape. These models are not only cost-effective but also outperform other models of the same size and even larger ones. Developers can fine-tune Phi-3-mini and Phi-3-medium with their data to build AI experiences that are more relevant to their users, safely and economically. The small compute footprint and cloud and edge compatibility of Phi-3 models make them ideal for a variety of scenarios, from tutoring to enhancing the consistency and quality of responses in chat and Q&A applications.
Microsoft’s collaboration with Khan Academy is a testament to the potential of Phi-3 models. Khan Academy uses Azure OpenAI Service to power Khanmigo for Teachers, an AI-powered teaching assistant that helps educators across 44 countries. Initial data shows that Phi-3 outperforms most other leading generative AI models in correcting and identifying student mistakes in math tutoring scenarios.
Azure AI’s commitment to innovation is further demonstrated by the introduction of Phi Silica, a powerful model designed specifically for the Neural Processing Unit (NPU) in Copilot+ PCs. This model empowers developers to build apps with safe, secure AI experiences, making Microsoft Windows the first platform to have a state-of-the-art SLM custom-built for the NPU.
The Azure AI model catalog now boasts over 1,600 models from various providers, including AI21, Cohere, Databricks, Hugging Face, Meta, Mistral, Microsoft Research, OpenAI, Snowflake, and Stability AI. This extensive selection ensures that developers have access to the best tools for their AI projects, whether they are working on traditional machine learning or generative AI applications.
Building AI solutions responsibly is at the core of AI development at Microsoft. Azure AI evaluations enable developers to iteratively assess the quality and safety of models and applications, informing mitigations and ensuring responsible AI deployment. Additional Azure AI Content Safety features, such as prompt shields and protected material detection, are now “on by default” in Azure OpenAI Service, providing an extra layer of security for developers.
Learn more about these recent exciting developments by checking out this blog: Announcing Phi-3 fine-tuning, new generative AI models, and other Azure AI updates to empower organizations to customize and scale AI applications | Microsoft Azure Blog
Adding a redundant exchange server on Prem in hybrid environment
Hello,
I have an on-prem Exchange 2016 server and am looking for advice on adding a second one. We have several automated email generation processes running on our domain, and when I have done Exchange and/or Windows updates, emails have been dropped.
Kindly advise on best practices for adding a second Exchange 2016 server.
Azure Machine Learning Pipeline Issue
Hello Team,
Currently, we are running a large set of ML recommendation models in an Azure compute cluster, and a single run takes more than 5 days.
How can we train on a large dataset (for example, around 5 million records) in the Azure compute cluster?
Here is the sample code:
import os
import pickle
import argparse
import json
import tempfile

import numpy as np
import pandas as pd
from azureml.core import Workspace, Datastore, Model
from azureml.data.dataset_factory import TabularDatasetFactory
from lightfm import LightFM, cross_validation
from lightfm.data import Dataset
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Parse arguments
parser = argparse.ArgumentParser("model_training")
parser.add_argument("--model_training", type=str, help="Model training data path")
parser.add_argument("--interaction", type=str, help="Interaction type")
args = parser.parse_args()

# Workspace setup
workspace = Workspace(subscription_id=os.environ.get("SUBSCRIPTION_ID"),
                      resource_group=os.environ.get("RESOURCE_GROUP"),
                      workspace_name=os.environ.get("WORKSPACE_NAME"))
print('Workspace:', workspace)

# Get the datastore from the Azure ML workspace
datastore = Datastore.get(workspace, datastore_name='data_factory')
print('Datastore:', datastore)

# Define the path to your Parquet files in the datastore
datastore_path = [(datastore, 'sampless_silver/')]

# Create a TabularDataset from the Parquet files in the datastore
tabular_dataset = TabularDatasetFactory.from_parquet_files(path=datastore_path)
print('Dataset:', tabular_dataset)

# Convert the TabularDataset to a pandas DataFrame
training_dataset = tabular_dataset.to_pandas_dataframe()
print('Training Dataset:', training_dataset)

# Sample and shuffle the data
training_dataset = training_dataset.head(25000000)
training_dataset = training_dataset.sample(frac=1).reset_index(drop=True)
training_dataset["views"] = pd.to_numeric(training_dataset['views'], errors='coerce')
df_selected = training_dataset.rename(columns={'clientId': 'userID', 'offerId': 'itemID'})
df_selected = df_selected[['userID', 'itemID', 'views']]
print('Selected Data:', df_selected)

# Build the LightFM dataset, ID mappings, and interaction matrices
lightfm_dataset = Dataset()
lightfm_dataset.fit(users=df_selected['userID'], items=df_selected['itemID'])
(interactions, weights) = lightfm_dataset.build_interactions(df_selected.iloc[:, 0:3].values)
user_dict_label = lightfm_dataset.mapping()[0]
item_dict_label = lightfm_dataset.mapping()[2]
train_interactions, test_interactions = cross_validation.random_train_test_split(
    interactions, test_percentage=0.25, random_state=np.random.RandomState(2016))

# Create and fit the model
model = LightFM(loss='warp', no_components=1300, learning_rate=0.000001,
                random_state=np.random.RandomState(2016), user_alpha=0.000005,
                max_sampled=100, k=100,
                learning_schedule='adadelta', item_alpha=0.000005)
print('Model:', model)
model.fit(interactions=train_interactions, epochs=2, verbose=True, num_threads=8)

# JSON keys must be strings
user_dict_label = {str(key): value for key, value in user_dict_label.items()}
item_dict_label = {str(key): value for key, value in item_dict_label.items()}

# Save and upload the model artifacts
with tempfile.TemporaryDirectory() as tmpdirname:
    recommendation_model_offer = os.path.join(tmpdirname, "sample_recommendation_model.pkl")
    with open(recommendation_model_offer, 'wb') as f:
        pickle.dump(model, f)
    model_intersection = os.path.join(tmpdirname, "sample_training_intersection.pkl")
    with open(model_intersection, 'wb') as f:
        pickle.dump(interactions, f)
    model_user_dict = os.path.join(tmpdirname, "users_dict_label.json")
    with open(model_user_dict, 'w') as f:
        json.dump(user_dict_label, f)
    model_item_dict = os.path.join(tmpdirname, "items_dict_label.json")
    with open(model_item_dict, 'w') as f:
        json.dump(item_dict_label, f)
    datastore.upload_files(
        files=[recommendation_model_offer, model_intersection, model_user_dict, model_item_dict],
        target_path='SAMPLE_MODEL_TRAINING/',
        overwrite=True
    )
    print('Files uploaded to datastore')

    # Register the model (inside the with block so tmpdirname still exists)
    register_name = f"{args.interaction}_light_fm_recommendation_model"
    Model.register(workspace=workspace, model_path=tmpdirname, model_name=register_name,
                   tags={'affinity': args.interaction, 'sample': 'recommendation'})
    print('Model registered')
Please share your feedback. Thanks!
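A hedged suggestion in response: rather than materializing tens of millions of rows into one pandas DataFrame with to_pandas_dataframe(), a common pattern is to stream the Parquet files in batches and build the ID mappings and interaction counts incrementally (LightFM's fit_partial can then consume successive interaction matrices). The sketch below uses pyarrow and assumes the Parquet files have been mounted or downloaded locally; the path and batch size are placeholders.

# Hypothetical sketch: stream Parquet data in ~1M-row batches instead of
# loading everything into memory at once. Paths are placeholders.
import pyarrow.dataset as ds

dataset = ds.dataset("sampless_silver/", format="parquet")

for batch in dataset.to_batches(
    columns=["clientId", "offerId", "views"], batch_size=1_000_000
):
    df = batch.to_pandas()
    # ... incrementally update user/item mappings and interaction counts here ...
    print(f"processed {len(df)} rows")

Reducing no_components from 1300 (which produces very large embedding matrices) and raising num_threads on a larger compute SKU would also be worth experimenting with.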
Share a dashboard and web app metrics with least privilege?
Hi, we have a publicly accessible Azure Web App.
I’d like to share the web app metrics through a shared dashboard with an internal customer who is in our Microsoft Entra tenant.
Everything for the web app, including the dashboard, is contained in a single resource group.
For the RBAC assignment, are Monitoring Reader on the web app and Reader on the dashboard appropriate, or is there some other role that would be lower privilege?
All I want is for them to be able to read the dashboard and the metrics in its tiles.
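For what it's worth, Monitoring Reader scoped to the web app plus Reader scoped to the dashboard resource is about as narrow as the built-in roles get for this scenario. A minimal sketch of creating such an assignment with the azure-mgmt-authorization Python SDK follows; the scope, principal ID, and resource names are placeholders, and the Monitoring Reader role-definition GUID is the documented built-in one, so verify it in your tenant.

# Minimal sketch: assign built-in Monitoring Reader at web app scope.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
web_app_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/rg-example"
    "/providers/Microsoft.Web/sites/webapp-example"
)
# Monitoring Reader built-in role definition GUID (from the Azure RBAC docs).
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/43d0d8ad-25c7-4714-9337-8ba259a9fe05"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope=web_app_scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters={
        "role_definition_id": role_definition_id,
        "principal_id": "<entra-object-id-of-user-or-group>",
    },
)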
How to reuse html customized aspx page in SharePoint online
Hi all, I have migrated a site from SharePoint 2016 to SharePoint Online. Because custom script is disabled in my online tenant, my requirement is to convert customized aspx pages (HTML) to modern online pages. I know SPFx is the platform to implement this, but could you please help me with how I can reuse the available HTML code in my SPFx web part instead of developing all the pages from scratch?
Force users to choose “sign in to this app only” when they sign in to another Microsoft account
Hi ,
We have some part-time students who log in to Outlook with their university credentials. This action overrides the “Work or School Account” Azure domain join and completely stops the computer from syncing with our tenant, causing conflicts with the Windows license. When I removed the school account, the computer immediately resumed syncing with Intune and the license became the correct one.
I would like to restrict Office accounts from taking over the domain join to prevent these issues. Could you please provide guidance on how to implement this restriction,
or how to force users to choose “sign in to this app only” when they sign in to another Microsoft account?
Thank you for your assistance.
Changing the organiser of Team meetings created via Booking Calendar to facilitate breakout rooms
We have an issue similar to this post: https://techcommunity.microsoft.com/t5/microsoft-bookings/choose-an-organiser-for-meetings-booked-in-bookings/m-p/3262100/thread-id/3321
We have a booking calendar which is set up with services that have group booking slots with Teams links. The intention is that when these are booked, we can then break the group into breakout rooms on the day. The problem is that in testing, we haven’t found a way to actually use this option. The organiser of these meetings is the booking calendar itself, since the links are created automatically by the app.
What I’d like to know is: has anyone in a similar situation managed to find a way either to switch on breakout rooms using the app, or to swap the organiser so we can do it manually? I’m just surprised if this is a general issue, because I would expect this feature to be used by organisations. Any ideas?
We have an issue similar to this post: https://techcommunity.microsoft.com/t5/microsoft-bookings/choose-an-organiser-for-meetings-booked-in-bookings/m-p/3262100/thread-id/3321 We have a booking calendar which is set up with services that have group bookings slots with Teams links. The intention is for when these are booked, we can then break the group in to Breakout Rooms on the day. The problem is that in testing, we haven’t found a way to actually allow us to use this option. The organiser of these meetings is the booking calendar itself from the links created, this being automated by the app.What I’d like to know is, has anyone who’s been in a similar situation managed to find to either switch on Breakout rooms using the app, or allow the organiser to be swapped so we can manually do it? I’m just surprised if this is a general issue, because I would expect this feature to be used by organisations. Any ideas? Read More
After Removing GPO, Intune Policies Not Applying
Part of our fleet remains Entra Hybrid Joined (as computers are refreshed, they are Entra Joined instead). We apply Windows Security Baselines through both Group Policy and Intune. Recently, we evaluated the differences between the two baselines and determined they are nearly identical. Accordingly, we decided to disable GPO-based security baselines for Entra Hybrid Joined devices and let Intune push the baseline’s security settings instead.
Here’s the expected behavior:
- Security baseline settings are set by both Intune and GPO. By default, GPO wins, so the Intune setting is not applied.
- When the GPO settings are removed, at some point in the next 24 hours (I believe it happens every 8) all Intune policies are reapplied whether or not they have changed. With the GPOs gone, MDM policies that were once blocked by Group Policy are applied.
- The end result: all security policies are applied, but most of them come from Intune (MDM) instead of from GPOs.
However, this is not what is happening. While Intune claims the security baseline has applied, the settings that were once overridden by GPOs never apply, and the computer effectively has no security baseline.
Here’s what I’ve done to try to fix this:
- Make a copy of the existing baseline with a new name, assign it to the computers, and unassign the original baseline. This does not work: the policies claim to have applied, but never apply on the endpoint.
- Change a single setting in the baseline, hoping the change triggers the whole configuration reapplying. The endpoint only applies the changed setting; other settings in the baseline do not get applied.
- Unassign the baseline entirely, wait for the computer to sync, and reassign the baseline. This works, but is not a viable solution for a large fleet of computers. This would be fine if all of our computers were receiving GPO updates regularly, but they’re not (they are remote). It only works if the computer syncs once while no settings are applied and again after the configurations are reassigned, and we can’t negotiate that timing for our whole fleet.
- Apply the policy that makes MDM policies take precedence over GPOs. This did not work.
Here’s what we’re not willing to try (I’m preempting some of Microsoft’s usual boilerplate responses):
- We will not reset the computers; there are too many for this to be a scalable solution.
- We will not unjoin and rejoin the computers from MDM; there are too many for this to be a scalable solution.
While I’m tempted to open a support case with Microsoft, this has only ever been a time-consuming and frivolous process. I expect they would pass the ticket around and eventually apologize to me when they decide this is a support case I should actually pay for.
Why would MDM policies not apply even after the group policies that once conflicted with them have been removed? This is impacting all Entra Hybrid Joined computers, the vast majority of which are running the latest build of Windows 11 23H2. Some of these computers have sat for 48 hours in this state, so I don’t think this is something that will be resolved with time.
Any advice would be greatly appreciated!
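One hedged diagnostic suggestion: when the “MDM wins over GP” policy is applied, it should surface in the PolicyManager registry hive, so checking an affected endpoint can tell you whether the CSP ever landed. The sketch below assumes the usual PolicyManager\current\device\<Area> registry layout for the ControlPolicyConflict CSP; verify the path on a reference machine before relying on it.

# Hypothetical diagnostic sketch (run on a Windows endpoint): check whether the
# ControlPolicyConflict/MDMWinsOverGP policy value actually reached the device.
import winreg

PATH = r"SOFTWARE\Microsoft\PolicyManager\current\device\ControlPolicyConflict"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH) as key:
        value, _ = winreg.QueryValueEx(key, "MDMWinsOverGP")
        print(f"MDMWinsOverGP = {value} (1 = MDM policy takes precedence)")
except FileNotFoundError:
    print("ControlPolicyConflict key not found - the policy has not applied.")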
Export data from Log Analytics Workspace to Storage Account
Hello community,
Could you please recommend a solution to migrate data from a Log Analytics workspace (one table) to a storage account?
There are about 70 million rows that should be exported.
Continuous export is not the solution here.
We were thinking about a Logic App, but there is too much data.
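One hedged approach that avoids the Logic App limits: page through the table in time slices with the azure-monitor-query SDK (keeping each query under the API's per-request row and size caps) and write each slice to blob storage with azure-storage-blob. The sketch below uses placeholder workspace, storage, and table names, writes naive CSV without headers or escaping, and slices by day; the slice width would need tuning so each query stays within limits.

# Minimal sketch: export a Log Analytics table to blob storage in daily slices.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()
logs = LogsQueryClient(credential)
blobs = BlobServiceClient(
    "https://<storageaccount>.blob.core.windows.net", credential=credential
)
container = blobs.get_container_client("laexport")  # pre-created container

workspace_id = "<log-analytics-workspace-id>"  # placeholder
table = "MyTable_CL"  # placeholder table name

day = datetime(2024, 6, 1, tzinfo=timezone.utc)
end = datetime(2024, 7, 1, tzinfo=timezone.utc)
while day < end:
    response = logs.query_workspace(
        workspace_id,
        f"{table} | order by TimeGenerated asc",
        timespan=(day, day + timedelta(days=1)),
    )
    rows = response.tables[0].rows if response.tables else []
    if rows:
        # Naive CSV serialization; swap in csv/pandas for real quoting rules.
        data = "\n".join(",".join(str(col) for col in row) for row in rows)
        container.upload_blob(f"{table}/{day:%Y-%m-%d}.csv", data, overwrite=True)
    day += timedelta(days=1)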
How to connect Azure DevOps Pipelines Variables to Azure Key Vault?
Variable groups in Azure DevOps provide a centralized and reusable way to manage pipeline variables across multiple pipelines or stages within a pipeline.
Here are the key advantages of using variable groups:
Reuse variables across pipelines or stages, which reduces repetition and makes maintenance easier.
Update variable values in one place, which automatically applies the change to all pipelines or stages using that variable group. This makes maintenance simpler and less error-prone.
Keep variables consistent across pipelines, which avoids discrepancies that may happen when handling variables in each pipeline separately.
Advantages of storing credentials in Azure Key Vault:
Better Security: Azure Key Vault offers a secure and centralized way to store sensitive data. You can use Key Vault to keep sensitive information safe and hidden from the pipeline variables.
Access Management: Azure Key Vault lets you control access to stored variables, so you can set permissions for different users or applications.
While there are some limitations to consider, such as inflexible settable variables and stable Key Vault values, the benefits of migrating to Azure Key Vault generally outweigh these drawbacks.
Steps involved in migrating Azure DevOps Pipeline Variables to Azure Key Vault
Step 1: Create an Azure Key Vault in Azure Portal
Step 2: Create Secrets in Azure Key Vault
Step 3: Create a service connection in Azure DevOps
Step 4: Create Variable Groups in Azure DevOps
Provision access to the Azure Key Vault for the service principal (App ID)
Step 5: Link the Azure Key Vault to variable group by ensuring the appropriate permissions on the service connection
Step 6: Link your Variable Group to the Pipeline
Step-by-Step elaborate Guide: Migrating Azure DevOps Pipeline Variables to Azure Key Vault
Step 1: Create an Azure Key Vault
Select Go to resource when the deployment of your new resource is completed.
You might face a problem while authorizing the Key Vault through a service connection. Here’s how you can resolve it:
Problem: During the authorization process, you may encounter an error indicating that the service connection lacks “list and get” permissions for the Key Vault.
Solution: Switch the permission model to access policies. Open the Key Vault’s page in the Azure Portal, click “Access configuration,” switch to “Vault access policy,” and apply. (Alternatively, stay on RBAC and grant the service principal an appropriate Key Vault data-plane role.)
Step 2: Create Secrets in Azure Key Vault
With the proper permissions in place, create the corresponding secrets within the Azure Key Vault. For each variable in the pipeline, create a secret in the Key Vault with the same name and the respective value.
Reference: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-secret-variables?view=azure-devops&source=recommendations&tabs=yaml%2Cbash
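If you have many variables to migrate, scripting the secret creation may be easier than the portal. Here is a minimal sketch using the azure-keyvault-secrets Python SDK; the vault name and variable values are placeholders.

# Minimal sketch: mirror pipeline variables into Key Vault secrets.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Key Vault secret names allow only letters, digits, and hyphens, so a
# variable like "db_password" must be renamed (e.g. "db-password").
pipeline_variables = {"db-password": "<value>", "api-key": "<value>"}
for name, value in pipeline_variables.items():
    client.set_secret(name, value)
    print("Created secret:", name)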
Step 3: Create service connection in Azure DevOps
Create a service connection
Sign in to your Azure DevOps organization, and then navigate to your project.
Select Project settings > Service connections, and then select New service connection to create a new service connection.
Select Azure Resource Manager, and then select Next.
Select Service principal (manual), and then select Next.
Select Azure Cloud for Environment and Subscription for the Scope Level, then enter your Subscription Id and your Subscription Name.
Fill out the following fields with the information you obtained when creating the service principal, and then select Verify when you’re done:
Service Principal Id: Your service principal appId.
Service Principal key: Your service principal password.
Tenant ID: Your service principal tenant.
Once the verification has succeeded, provide a name and description (optional) for your service connection, and then check the Grant access permission to all pipelines checkbox.
Select Verify and save when you’re done.
Reference: https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml
There are two ways to create the service connection:
Option 1: Let Azure DevOps create the app registration automatically. The display name matches, but the app ID is generated randomly.
Option 2: Create the service principal first (create the app ID yourself) and use it in the service connection, giving you a consistent, unique identity in both Azure DevOps and the Azure portal. This is the option to use.
Step 4: Create Variable Groups in Azure DevOps (To link to Azure Key Vault in following steps)
Open the Variables tab under Pipelines > Library and choose “+ Variable group”
Add variable group name and description
Select the checkboxes for “Allow access to pipelines” and “Link secrets from an Azure key vault as variables”
Select Azure subscription
Link secrets from an Azure key vault
In the Variable groups page, enable Link secrets from an Azure key vault as variables. You’ll need an existing key vault containing your secrets.
To link your Azure Key Vault to the variable group, ensure that you have the appropriate permissions on the service connection. Service connections provide the necessary credentials to access resources like Azure Key Vault. Grant the necessary permissions by configuring the access policies in the Azure Key Vault settings.
Step 6: Link your Variable Group to the Pipeline
To utilize the migrated variables from Azure Key Vault, link the variable group to your pipeline:
Go to the variables tab on your pipeline
Once you link the variable group to your pipeline, its secrets become available to the pipeline as variables.
Entering Hanja (Korean) on Surface Laptop (Copilot+ PC) (US version)
Hello,
I bought the US version of the new Surface Laptop (Copilot+ PC) (13″) last week. I regularly type in Korean and have just noticed that the new Copilot key has replaced the key next to the right Alt key, which is used to input Hanja on the Windows Korean keyboard. How do I input Hanja now?
Thank you so much!
Best regards from New Orleans.
Edit tables with ease in Word for the web
Hi, Microsoft 365 Insiders,
Great news for Word for the web users! We are excited to announce a new feature that makes editing tables even smoother. You can now quickly and easily modify tables to improve your document’s formatting and appearance — no cutting or pasting required! This update allows you to effortlessly edit your tables so you can focus on your content.
Check out our latest blog by Anushri Sahu, Product Designer, and Kirti Sahu, Product Manager, from the Word team: Edit tables with ease in Word for the web
Thanks!
Perry Sjogren
Microsoft 365 Insider Community Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android
Viva Amplify Roadmap Blog
As we continue to innovate and enhance Microsoft Viva, we’re excited to share a glimpse into the future of Viva Amplify. Our commitment to providing a centralized platform for orchestrating and managing campaigns and communications remains strong, and we’re thrilled to announce new features and capabilities that will roll out in the coming months. Some of these features are geared toward corporate communicators, while others empower anyone who needs to communicate with their teams, projects, and stakeholders. We have more coming for frontline managers as well, which we’ll share at a later date.
Accelerate Copilot adoption with pre-built campaigns
Last month, Amplify added the Copilot Deployment Kit, which includes 8 pre-drafted communications to help organizations plan, communicate, and adopt Copilot. Now, to help with broader adoption of Copilot across the various Viva applications, we’re adding a new Viva for AI Transformation pre-built campaign to help corporate communicators and change management leaders with their AI transformation efforts by highlighting specific capabilities within each Viva module.
The Viva for AI Transformation campaign includes 10 pre-drafted communications and a campaign brief with objectives and key messages. Each communication can easily be edited, reviewed, and published to multiple channels, including SharePoint, Outlook, and Teams, highlighting the specific AI capabilities available in each Viva module and how employees and the organization can benefit from them.
Copilot in Viva Amplify Editor
We’re bringing the superpowers of Copilot directly into the Amplify editing experience to revolutionize the way you create and enhance content, providing writing assistance for all your communications. Simply click the Copilot icon for help with content, style, rewrites, and tone. Copilot in Viva Amplify will be available in preview soon.
The Auto rewrite option quickly surfaces suggestions based on the text you’ve already entered, or you can pick specific enhancements, such as more concise or more expansive language.
Copilot will also help you adjust the tone of your content, keeping it consistent across all your communications and making your messaging more coherent so it resonates with different audience segments. With this capability, you will be able to adapt your content to various tones, whether you need a casual tone for social messages, an engaging tone to compel and draw in your audience, or a professional tone for business communications.
Required Approvals
Like Lists and libraries, campaigns can contain sensitive information, such as marketing campaign budgets or human resources initiatives. The required approval feature brings compliance, accountability and workflows to Lightweight Approvals in Viva Amplify. By enabling required approval for a campaign, stakeholders can ensure that all campaign content and associated publications adhere to organizational standards and receive the necessary approval before publishing, thus minimizing risks and errors.
You can require approval at the campaign level so that all Viva Amplify publications within the campaign go through the approval process before the content is published. This is an optional setting that a user can choose to apply to a campaign. By requiring approval, organizations can apply a significant level of quality and security to their content, ensuring every piece of content aligns perfectly with their standards and expectations.
Required approval is targeted to be generally available in August 2024.
Campaign goals
Coming soon, you’ll be able to define the goals and objectives of a campaign within Viva Amplify and track progress against them using campaign goals. Goals establish a clear path for a campaign, guiding every action and decision and providing benchmarks for measuring progress so you can achieve your campaign objective(s). In this coming release, Viva Amplify will support goal tracking for the unique viewers metric, integrated with analytics capabilities. When you set a campaign goal in the brief, it will be applied at the campaign level for all publications and published distribution channels. By setting specific targets, you can track progress and determine whether the campaign is meeting its goals for all distribution channels. Campaign goals empower you to make informed decisions and adjustments as needed throughout the campaign.
Copy a publication
Gone are the days when you had to rewrite or manually copy and paste content from an old publication into a newly drafted one to reuse it. Soon in Viva Amplify, you will be able to copy a publication within an existing campaign with just a few clicks. This new feature streamlines the content creation process, enabling you to easily reuse existing content across SharePoint, Outlook, and Teams, including all channel-specific customizations and related audiences, saving you time and effort so you can be more efficient.
Switch quickly between content editing, channels and writing guidance
Coming soon, you will see the SharePoint content pane also available in Viva Amplify. The content pane serves as a convenient hub for various panes that support authors in crafting their publications. This centralized space now features a user-friendly toolbox that enables authors to easily explore and insert content for creating dynamic and captivating publications and incorporates other useful panes like configuration tools and design ideas. Additionally, and specific to Viva Amplify, it also hosts the distribution channel selection, writing guidance, and audience selection specific to the distribution channels. With this change we are also introducing the ability to add or remove channels directly from the distribution channel tabs.
Streamlined Authoring Experience in Teams and Outlook
Coming soon, we are rolling out updates to the editors for the Microsoft Outlook and Microsoft Teams distribution channels, to streamline previewing and editing. You will see the new editing experience when creating a new publication as part of a new or existing campaign and select to publish to Outlook and Teams. The new experience will enable you to customize the content for Outlook and Teams using a supported set of familiar web parts directly from the main drafting experience and improvements for loading content into the editor.
In addition to the changes to the canvas for preview and customization, you will be able to select the audience for the channel on the right side of the screen, independent from the editing canvas. You will continue to be able to switch between Preview and Customize and send test emails to verify how the published email is received in the different Outlook clients or is posted in Teams.
The streamlined authoring experience for Teams and Outlook channels will be rolling out in August and September.
Analytics
Reporting and analytics are a crucial piece of the Amplify value, and soon you’ll be able to go even deeper into engagement and capture new metrics. The images below are designs, shown to illustrate the breadth of capabilities coming.
Let’s go deeper on how effective your campaigns and communications are with these new metrics and capabilities, including:
- Audience breakdown and organizational pivots: see engagement filtered by role, department, or other user information.
- Campaign brief integration: Amplify analytics gives campaign owners visual feedback on progress as measured against the goals set in the campaign brief.
- Trend graphs and simpler layouts: visualize data over time with easy-to-read charts.
- Reactions: understand the social gestures of the reactions you’ve received on your publication and across the entire campaign.
- Export to PowerPoint: you can already download reports to CSV, and we’re making it quick to present your communication progress in slides.
- Click-through rate: see the performance of links and read rates within your publications.
- Dwell time: understand how long viewers spend viewing your publications.
- Multi-value queries: select multiple organizational metadata values, combined with endpoints, to create “and” queries that provide deeper context and understanding.
Viva Engage integration
One of the most requested features, already in private preview, is the ability to publish from Viva Amplify to Viva Engage communities and storylines. Analytics signals for Engage distribution are already included in our existing reports in the private preview. We’re listening to preview customers’ feedback to improve the experience for the next version. Top requests, such as support for publishing as Articles in Engage and across multiple communities, are already being looked at, and we appreciate your feedback on what is most important to you when publishing to Engage from Viva Amplify.
Looking Ahead
As we build upon the success of Viva Amplify, we’re eager to hear your feedback and involve you in shaping the future of our platform. Stay tuned for more updates and get ready to amplify your communications with Microsoft Viva.