Month: November 2024
Announcing the general availability of sidecar extensibility in Azure App Service
The cloud has changed the business landscape: continuous innovation is no longer optional, and enterprise apps must modernize or perish. Generative AI is being adopted quickly, not only to build net-new apps but also to transform existing apps to deliver higher-value, AI-assisted outcomes. App modernization means updating and improving apps so they work well with evolving software and cloud platforms. There are several ways to achieve this, whether through a simple 'lift and shift' of applications to the cloud or a rewrite as cloud-based services that integrate well with other modern solutions.
Today, we are excited to announce the general availability of sidecars in Azure App Service: a versatile pathway for organizations to modernize existing apps and add powerful new capabilities without having to significantly rewrite the main application code.
What are sidecars, and where could they fit into your modernization story?
What is the sidecar pattern? Think of a motorcycle with an attached sidecar. You pair your primary enterprise app (the motorcycle) with a modern, supplementary set of tasks running in their own container (the sidecar). These tasks let you add new capabilities, such as AI, logging, monitoring, and security features, to your primary application without significantly modifying or redeploying it.
The sidecar pattern enables you to bring modern capabilities such as AI to your legacy apps much faster, without rebuilding the primary app from the ground up. There are lifecycle and integration advantages as well: the primary app and its sidecar capabilities can be deployed and managed as a single unit, yet they remain distinct containers, so they are easier to maintain and you can modify components of the sidecar independently without changing the entire app (and vice versa).
Using sidecars to add modern features to your existing apps
Let’s walk through a few app modernization scenarios where the sidecar approach may help.
Infuse generative AI into existing apps. Microsoft offers multiple AI models to help you get started on your AI journey, whether you choose a large language model (LLM) such as Azure OpenAI, or a small language model (SLM) like Phi-3. SLMs are lightweight, efficient, and accessible, making them ideal for environments with limited computational resources and for real-time inference. They can be fine-tuned for specific domains, enhancing performance and accuracy. SLMs are also more secure, environmentally friendly, and suitable for edge computing due to their smaller codebases, lower energy use, and faster inference times. In this example, you can integrate Phi-3 ONNX using the sidecar pattern to deploy and run intelligent apps on Azure App Service without using GPUs. This approach offers a faster way to get started with AI at relatively lower cost, and it serves as an onramp to scale up to more generalized and powerful LLMs, such as deploying OpenAI-powered apps on Azure App Service, in the future.
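To make the pattern concrete, here is a minimal sketch of how a main app might call a Phi-3 sidecar. Sidecar containers in App Service are reachable from the main container over localhost, but the port, route, and payload shape below are illustrative assumptions; the actual interface depends on how the sidecar image is built.

import requests

# Hypothetical local endpoint exposed by a Phi-3 ONNX sidecar container.
# Sidecars share the localhost network with the main app; the port and
# route here are assumptions for illustration, not a fixed App Service API.
SIDECAR_URL = "http://localhost:8000/generate"

def ask_sidecar(prompt: str, timeout: float = 30.0) -> str:
    """Send a prompt to the local SLM sidecar and return its completion."""
    response = requests.post(SIDECAR_URL, json={"prompt": prompt}, timeout=timeout)
    response.raise_for_status()
    return response.json()["text"]

print(ask_sidecar("Summarize this support ticket in one sentence."))

Because the call never leaves the instance, no public inference endpoint or GPU-backed service is required for this scenario.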
Gain modern observability, reporting, and analytics. Observability has become crucial for modern AI-infused applications. A sidecar lets you integrate metrics, logging, and tracing functionality without completely rebuilding the primary app. This includes first-party solutions such as Application Insights, a feature of Azure Monitor that excels in application performance monitoring (APM) for live web applications, as well as Azure Native ISV Services partners such as Dynatrace, Datadog, and others. Learn more.
Improve web app performance during high traffic. Redis is an in-memory data store, which means it can retrieve data much faster than traditional databases, leading to application performance improvements, especially for read-heavy workloads. Azure Cache for Redis integrates seamlessly with other Azure services, such as Azure App Service, Azure Kubernetes Service (AKS), and Azure Functions. By deploying Redis as a sidecar, you can more easily unlock the benefits of caching and improve performance for your web applications.
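As a rough illustration of that setup, assuming the Redis sidecar listens on its default port 6379 on localhost (the key names and the database helper below are hypothetical), the main app could implement a read-through cache like this:

import json
import redis  # pip install redis

# A Redis sidecar shares the localhost network with the main app;
# 6379 is Redis's default port and is assumed here for illustration.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_product(product_id: str) -> dict:
    """Read-through cache: try the Redis sidecar first, then the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    product = load_product_from_database(product_id)  # hypothetical helper
    cache.setex(key, 300, json.dumps(product))  # cache for 5 minutes
    return product

Read-heavy pages then hit the in-memory sidecar instead of the database on most requests.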
These are just a few examples of how you can use sidecars to modernize your applications. The possibilities are only limited by your imagination!
Get started today with Azure App Service
Visit the product documentation to learn more and get started. Watch the Learn Live session on demand for a demo of the new features from our experts. Join us at Microsoft Ignite 2024 to discover solutions that help modernize and manage intelligent apps, including demos of the sidecar pattern and an opportunity to connect with the engineers building this technology in the Expert Meet-up areas.
If you’re new to Azure, take advantage of our free credit program by signing up for a free account at Azure App Service.
Here are some other helpful resources to learn more:
Catch A Glimpse into the Future: The Sidecar Pattern on Linux App Service
Visit Microsoft Learn to watch a tutorial on configuring a sidecar container
Read about building smarter apps by integrating Phi-3 SLM with Linux App Service
Learn how to implement local RAG using Phi-3 ONNX Runtime and Sidecar Pattern on Linux App Service
Revisit many of the links featured in this blog:
Optimizing SLM with ONNX Runtime: Phi-3 on CPU with Sidecars for App Service – Azure App Service
A Step-by-Step Guide to Datadog Integration with Linux App Service via Sidecars – Azure App Service
Powering Observability: Dynatrace Integration with Linux App Service via Sidecars – Azure App Service
Leveraging Redis as a Sidecar for Linux App Service – Azure App Service
Deploying OpenAI-powered apps on Azure App Service
The Future of AI: Maximize your fine-tuned model performance with the new Azure AI Evaluation SDK
The Future of AI: LLM Distillation just got easier
Part 4 – Maximize your fine-tuned model performance with the new Azure AI Evaluation SDK
By Cedric Vidal, Principal AI Advocate, Microsoft
Part of the Future of AI 🚀 series initiated by Marco Casalaina with his Exploring Multi-Agent AI Systems blog post.
In earlier posts of this distillation series, we detailed the process of distilling a Llama 3.1 405B model into a more compact Llama 3.1 8B model. This journey included generating a synthetic dataset using RAFT, as well as fine-tuning and deploying our student model on Azure AI Serverless.
But how can we confirm that our distilled model performs optimally? The crucial final step is evaluating the model.
Effective model evaluation is key to ensuring that our AI systems function as expected and meet the desired standards. With the introduction of the Azure AI Evaluation Python SDK, we now have a powerful toolkit for assessing AI models through advanced metrics. In this blog post, we’ll look at evaluating a distilled student model, which was trained with data generated by RAFT, and compare it against a baseline model.
In our setup, Llama 3.1 405B functions as the teacher, Llama 3.1 8B serves as the student model and GPT-4 serves as the judge.
Why evaluate?
Evaluating distilled student models is crucial because it allows us to assess how effectively knowledge has been transferred from the teacher model to the student model. Distillation aims to compress a larger, more complex model into a smaller, more efficient one without significantly sacrificing performance. By thoroughly evaluating the distilled models, we ensure they not only mimic the teacher model’s outputs but also maintain high levels of accuracy, coherence, and relevance. This evaluation process helps identify areas where the student model may need further fine-tuning and ensures that the distilled models are ready for deployment in resource-constrained environments where computational efficiency is paramount.
Process Overview
Evaluating the performance of our models involves several key steps, which can be broadly categorized under Testing and Scoring.
Testing
Run the Baseline Model on the Evaluation Split: Our first step is to run the teacher model (Llama 3.1 405B) on the evaluation split to generate its predictions.
Run the Student Model on the Evaluation Split: Next, we run the student model on the same evaluation dataset to generate its predictions.
Scoring
Calculate Metrics for the Baseline Model: Using the predictions from the baseline model, we calculate various performance metrics.
Calculate Metrics for the Student Model: Similarly, we calculate the performance metrics for the student model’s predictions.
Compare Metrics: Finally, we compare the performance of both models, highlighting the results through visuals and diagrams.
Testing the baseline and student models
Installing the SDK
First, you need to install the Azure AI Evaluation SDK:
pip install openai azure-ai-evaluation azure-identity promptflow-azure
Note on SDK Availability: It’s important to highlight that the Azure AI Evaluation SDK is currently in beta. This means that while the SDK offers a comprehensive suite of tools and features for evaluating AI models, it may still undergo changes and improvements. Users should stay updated with any modifications or enhancements introduced by Azure, and consider providing feedback to help refine and optimize the SDK for wider use in its official release.
Baseline Model Testing
This will generate answers to the questions in the evaluation dataset using the baseline model:
env $(cat .env .env.state) python .gorilla/raft/eval.py \
    --question-file $dataset_path_hf_eval \
    --answer-file $dataset_path_hf_eval_answer_baseline \
    --model $BASELINE_OPENAI_DEPLOYMENT \
    --env-prefix BASELINE \
    --mode $BASELINE_MODEL_API
Note: the JSONL output needs further conversion to a format suitable for scoring; see the eval notebook for details.
Student Model Testing
This will generate answers to the questions in the evaluation dataset using the student model:
env $(cat .env .env.state) python .gorilla/raft/eval.py \
    --question-file $dataset_path_hf_eval \
    --answer-file $dataset_path_hf_eval_answer \
    --model $STUDENT_DEPLOYMENT_NAME \
    --env-prefix STUDENT \
    --mode $STUDENT_MODEL_API
Note: the JSONL output needs further conversion to a format suitable for scoring; see the eval notebook for details.
Let’s look at a sample
This sample is extracted from the evaluation split and shows the baseline and student answers:
question: What types of waves do strong direct offshore winds create?
gold_final_answer: plunging or large barrel waves
context: <DOCUMENT>Lefts, Rights, and A-frames could be directed from this pump design, providing for rippable surf and barrel rides. The Ocean Dome cost about $2 billion to build and was expensive to maintain.[31] The Ocean Dome was closed in 2007.</DOCUMENT> <DOCUMENT>However, the waves that are produced by reef breaks are some of the best in the world. Famous reef breaks are present in Padang Padang (Indonesia), Pipeline (Hawaii), Uluwatu (Bali), and Teahupo'o (Tahiti).[49][52] A ledge break is formed by steep rock ledges that make intense waves because the waves travel through deeper water then abruptly reach shallower water at the ledge. Shark Island, Australia is a location with a ledge break.</DOCUMENT>
baseline_answer: Strong direct offshore winds create plunging or large barrel waves. These waves are characterized by their increased height and intensity due to the shallow water depth when they break.
student_answer: plunging or large barrel waves
This sample was chosen randomly and in this case, the student model answer is identical to the gold answer. This is not always the case.
Evaluating the baseline and student model responses
Built-in Evaluators
The Azure AI Evaluation SDK offers an extensive suite of built-in metrics, designed to facilitate comprehensive evaluation of AI models. In the following sections, we’ll highlight selected evaluators and provide detailed examples of their application, showcasing how they can enhance your model assessments.
They are categorized into two main groups: (1) metrics that leverage GPT models for scoring, providing advanced qualitative assessments, and (2) metrics that utilize straightforward mathematical calculations for evaluation.
GPT-based metrics

| Category | Evaluator Class | Notes |
| --- | --- | --- |
| Quality | GroundednessEvaluator | Groundedness measures the extent to which the generated content is based on factual correctness and aligns with the provided data or context. |
| Quality | RelevanceEvaluator | Relevance assesses how pertinent the generated text is to the given input or prompt. Higher relevance scores indicate that the generated responses are more appropriate and closely aligned with the query or topic. |
| Quality | CoherenceEvaluator | Coherence measures how logically consistent and semantically meaningful the generated text is. Higher coherence indicates better understanding and logical consistency. |
| Quality | FluencyEvaluator | Fluency evaluates how naturally the generated text reads. Fluent text should be grammatically correct and smooth in its flow. |
| Quality | SimilarityEvaluator | Measures the similarity between the predicted answer and the correct answer. |
| Content Safety | ViolenceEvaluator | |
| Content Safety | SexualEvaluator | |
| Content Safety | SelfHarmEvaluator | |
| Content Safety | HateUnfairnessEvaluator | |
| Composite | QAEvaluator | Built on top of the individual quality evaluators. |
| Composite | ChatEvaluator | Similar to QAEvaluator but designed for evaluating chat messages. |
| Composite | ContentSafetyEvaluator | Built on top of the individual content safety evaluators. |
Math-based metrics

| Evaluator Class | Notes |
| --- | --- |
| BleuScoreEvaluator | BLEU (Bilingual Evaluation Understudy) is a widely used metric for evaluating the quality of text generated by an AI by comparing it to one or more reference texts. It particularly looks at the precision of n-grams in the generated text. |
| RougeScoreEvaluator | ROUGE (Recall-Oriented Understudy for Gisting Evaluation) primarily measures recall, comparing n-grams between the generated text and reference texts. It is commonly used for evaluation in summarization tasks. |
| F1ScoreEvaluator | A balance between precision and recall, the F1 score provides a single metric that combines both, offering a more comprehensive view of performance in classification problems. |
Running metrics individually
The Azure AI Evaluation SDK enables the utilization of individual metrics. This feature is particularly useful for experimentation, gaining deeper insights, and incorporating metrics into bespoke evaluation workflows.
Tech Tip: This blog post is crafted using the Quarto writing system, a versatile tool for publishing with code. The Azure AI Evaluation metrics are seamlessly executed and displayed inline within this post.
Let’s look first at the F1 Score math metric
For a response that is accurate but includes additional information not found in the ground truth:
from azure.ai.evaluation import F1ScoreEvaluator

f1_score_evaluator = F1ScoreEvaluator()
f1_score = f1_score_evaluator(
    ground_truth="The capital of Japan is Tokyo.",
    response="Tokyo is Japan's capital, known for its blend of traditional culture"
)
print(f"The F1 Score is {round(f1_score['f1_score'], 2)}")
The F1 Score is 0.5
For a response that is accurate but arranges the same words differently:
from azure.ai.evaluation import F1ScoreEvaluator

f1_score_evaluator = F1ScoreEvaluator()
f1_score = f1_score_evaluator(
    ground_truth="The capital of Japan is Tokyo.",
    response="Tokyo is Japan's capital"
)
print(f"The F1 Score is {round(f1_score['f1_score'], 2)}")
The F1 Score is 0.67
Let's now look at the Similarity GPT metric
We first need to instantiate the Judge model client:
from os import getenv
from azure.ai.evaluation import AzureOpenAIModelConfiguration
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint=getenv("JUDGE_AZURE_OPENAI_ENDPOINT"),
    azure_deployment=getenv("JUDGE_AZURE_OPENAI_DEPLOYMENT"),
    api_version=getenv("JUDGE_OPENAI_API_VERSION"),
)
Let’s now instantiate the Similarity score metric:
from azure.ai.evaluation import SimilarityEvaluator
similarity_evaluator = SimilarityEvaluator(model_config)
For a response that is accurate but includes additional information not found in the ground truth:
similarity = similarity_evaluator(
    query="What's the capital of Japan?",
    ground_truth="The capital of Japan is Tokyo.",
    response="Tokyo is Japan's capital, known for its blend of traditional culture"
)
print(f"The Similarity is {similarity['gpt_similarity']}")
The Similarity is 4.0
For a response that is accurate but arranges the same words differently:
similarity = similarity_evaluator(
    query="What's the capital of Japan?",
    ground_truth="The capital of Japan is Tokyo.",
    response="Tokyo is Japan's capital"
)
print(f"The Similarity is {similarity['gpt_similarity']}")
The Similarity is 5.0
GPT-based similarity metrics demonstrate greater robustness in evaluating correct responses that are phrased differently compared to traditional F1 Scores.
Running metrics in bulk
While evaluating metrics individually helps in understanding their functionality, acquiring statistically significant results necessitates running them on a larger scale across an evaluation dataset.
The Azure AI Evaluation SDK provides a convenient bulk evaluation capability via the evaluate function.
To begin, we need to initialize the evaluators that will be used to assess the student and baseline models:
from azure.ai.evaluation import (
    BleuScoreEvaluator, CoherenceEvaluator, F1ScoreEvaluator, FluencyEvaluator,
    GroundednessEvaluator, RelevanceEvaluator, RougeScoreEvaluator, RougeType,
    SimilarityEvaluator,
)

# Initializing evaluators
evaluators = {
    # GPT-based metrics
    "coherence": CoherenceEvaluator(model_config),
    "fluency": FluencyEvaluator(model_config),
    "groundedness": GroundednessEvaluator(model_config),
    "relevance": RelevanceEvaluator(model_config),
    "similarity": SimilarityEvaluator(model_config),
    # Math metrics
    "f1_score": F1ScoreEvaluator(),
    "bleu": BleuScoreEvaluator(),
    "rouge_1": RougeScoreEvaluator(RougeType.ROUGE_1),
    "rouge_2": RougeScoreEvaluator(RougeType.ROUGE_2),
}
Note that we have previously executed the baseline and student models on the evaluation dataset, which means the JSONL file provided to the evaluate function already includes their responses. Consequently, further model invocations are unnecessary at this stage.
Recommendation: It’s often beneficial to run the baseline and student models once initially. By doing so, you can execute the evaluate function multiple times with various metrics configurations without re-incurring the inference time and costs associated with model executions. Note that while this avoids repeated inference expenses, using GPT-based metrics will still incur costs and time for each evaluate execution, as the Judge model is utilized.
from azure.ai.evaluation import evaluate

result = evaluate(
    data="test-results-[baseline|student].jsonl",
    evaluators=evaluators,
    evaluator_config={
        "default": {
            "column_mapping": {
                "query": "${data.question}",
                "response": "${data.final_answer}",
                "ground_truth": "${data.gold_final_answer}",
                "context": "${data.context}",
            }
        }
    },
)
This command initiates a background process that hosts a user interface locally.
The interface updates in real-time to display the progress of the scoring process on the evaluation dataset.
Additionally, you can click on each completed line to view the detailed trace of the calls. This feature is particularly useful for GPT-based metrics, as it reveals the system prompt used and provides insights into the underlying logic that contributed to the final score.
Comparing Metrics and Visualizing Results
Note: You can find the implementation details for generating the comparison figures of baseline and student metrics in the repository notebook. This resource provides comprehensive insights into how the metric comparisons were conducted, along with the code necessary to reproduce these visualizations.
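For orientation, here is a minimal sketch of one such comparison chart, assuming the aggregated scores from each evaluate run (available in the "metrics" entry of its result) have been collected into two dictionaries. The metric names and values below are illustrative placeholders, not results from the actual runs:

import matplotlib.pyplot as plt
import numpy as np

# Illustrative aggregate scores; real values come from the "metrics"
# entry returned by evaluate(...) for the baseline and student outputs.
baseline = {"coherence": 4.1, "fluency": 4.3, "similarity": 3.8, "f1_score": 0.52}
student = {"coherence": 4.0, "fluency": 4.2, "similarity": 4.1, "f1_score": 0.61}

metrics = list(baseline)
x = np.arange(len(metrics))
width = 0.35

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(x - width / 2, [baseline[m] for m in metrics], width, label="Baseline")
ax.bar(x + width / 2, [student[m] for m in metrics], width, label="Student")
ax.set_xticks(x)
ax.set_xticklabels(metrics)
ax.set_ylabel("Score")
ax.set_title("Baseline vs. student model (illustrative values)")
ax.legend()
plt.tight_layout()
plt.show()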
Going further with continuous model evaluation and GenAIOps
This marks the beginning of a continuous improvement journey. It’s quite common to find that the student model’s initial performance does not meet expectations. Through our evaluation, we may uncover areas needing adjustment—whether it’s refining the synthetically generated dataset, optimizing fine-tuning parameters, or other elements. This initiates a cycle of iterative improvement and reassessment before the model is ready for deployment in production.
To effectively help you navigate this process, we came up with the GenAIOps Maturity Model, which serves as a comprehensive guide for evaluating your progress and maturity in operationalizing AI models.
Conclusion
By leveraging the Azure AI Evaluation Python SDK, we gain a detailed understanding of how our distilled student model compares to the baseline model across a spectrum of performance indicators. This structured evaluation framework not only helps in refining our models but also ensures that we are continuously improving and delivering robust AI solutions.
Explore, fork and clone the comprehensive 🚀🔥 GitHub Recipe Repository for complete code coverage on executing the full distillation process, including in-depth evaluations as detailed in this blog post. Discover step-by-step notebooks and resources to master the entire pipeline efficiently.
Stay tuned for more insights and tutorials on advanced AI topics and the latest tools available in the Azure ecosystem!
Issues with a date and time column populated by Power Automate in Excel
Good day.
I created a Power Automate flow that takes a date and time and populates them into an Excel spreadsheet. However, once imported, the date and time are recognized by Excel as text, not as date and time values. This persists until I select the cell, place my cursor in the formula bar, and press Enter. Only then are they recognized as date and time. I am using formatDateTime in Power Automate to export the date and time.
Any help would be appreciated.
Inbound Sensitive Information
Hello All,
We currently have some DLP policies to restrict financial data, HIPAA, and PII data from leaving our org.
However, is there a way to restrict this type of sensitive data from being sent into the org? For example, an external address sends some sensitive data to a specific mailbox. Can a DLP policy be created to block that data from reaching the mailbox and reply that the email was blocked due to its content?
Thanks for any info!
“Coming soon” (“bientôt disponible”) message in the Copilot sidebar
Hello,
How can I get my Copilot working again?
I mainly use it for the chat feature, to converse about documents open on the page.
Recently, Copilot has been showing me “coming soon” (“bientôt disponible”), and nothing I do changes that. The chat feature is unusable for me…
What can I do?
Power Query: dynamic source, use parameters?
Hi all
I’m trying to make my Power Queries more flexible.
Starting position: I have three queries based on two xlsx files (SAP exports) as connections only. The two files show equipment data from different dates, which are compared using the queries. They show new equipment, removed equipment, and changes.
The files are named by date (01/08/2024, 01/09/2024, etc.) and are all in the same folder.
My goal: I want the end user to be able to pick any two SAP exports within Excel and the queries to update accordingly. They should not use Power Query Editor to change the query source.
I tried parameters but failed when connecting it to the dedicated cell in Excel.
What are the steps using parameters? Or are there alternative solutions?
Thanks from a beginner!
Macro/VBA to merge two documents
Hi All,
Struggling to find VBA/macro code that will suit my needs. I have a file that will become the parent file for a new system. In this file there are roughly 1,600 rows already filled out. There is then a separate file, call it the donor file, that has information that needs to be copied to the parent file. I need a macro that can pull this information from the donor file and copy it to the parent. However, there are 3 columns, and each cell within these columns has different criteria that must match between the two files before moving the information. Once the criteria match, I then need to insert the number of rows that have matching criteria into the parent file. Those newly inserted rows then need to be filled with information from 2 different columns within the donor file as well as the 3 criteria set by the parent file.
In short, the goal is to make a tree-like file, where the inserted rows take information from only the row above, and the number of rows inserted is based on the number of rows that match the set criteria. The inserted rows then need to be filled with 2 pieces of information from one file and 3 pieces of information from the other.
I have been struggling to find an solution to this problem, please send help…😭😂
Defender for Server: Azure or License based?
Hello!
In October of 2022, Microsoft took the Defender for Endpoint Server SKU off the price file and replaced it with Defender for Endpoint Plan 1 and Plan 2, which are purchased through Azure. Our company started migrating customers from the old SKU to the new Azure model and all was well. For the last six months or more, Defender for Server has been showing up on our NCE price file once more. After Microsoft told me to start selling the consumption SKU, I am now unsure what my next move should be. Are you selling both products? Is there a use case for each? Has anyone had any experience with this issue?
Thanks in advance!
Teams cloud app policy template not showing
The alerts below should have been available since last year, but I don't see them in my list.
Access level change (Teams): Alerts when a team's access level is changed from private to public.
External user added (Teams): Alerts when an external user is added to a team.
Mass deletion (Teams): Alerts when a user deletes a large number of teams.
We have the Microsoft 365 E5 Security license. Do we need another license for that?
FILTER Function – “include” parameter as string from another cell
Hi All
I have a table on Worksheet A, unimaginatively named “Table1”, with fields including “CODE”.
On Worksheet B, in cell C1, I have this formula:
=FILTER(Table1,(Table1[CODE]="T1") + (Table1[CODE]="P1"))
This works fine i.e. it shows a new table with all the rows where CODE = “T1” or “P1”. Lovely.
Also on Worksheet B, in cell A1, I have the following string:
(Table1[CODE]="T1") + (Table1[CODE]="P1")
I have created this string using logic based on the data in Table1. On another day, the string literals might change and there may be more OR-ed elements, perhaps:
(Table1[CODE]="ENG002") + (Table1[CODE]="BBBB") + (Table1[CODE]="Z YW")
Essentially this string is volatile and I don’t want to hard-code it as in the first example.
How can I successfully use the string in cell A1 as the 'include' parameter to the FILTER function?
I tried:
=FILTER(Table1,A1)
but this gives #VALUE!
I thought INDIRECT might work but:
=FILTER(Table1,INDIRECT(A1))
gives a #REF!
I think I am missing something obvious but can’t see it. Can you help, at all? Thanks VM.
Peter
Navigating Azure Bot Networking: Key Considerations for Privatization
Navigating the complexities of cloud solutions can be a daunting task, and Azure Bot Solutions are no exception. Many customers face the challenge of privatizing their bot’s messaging endpoint, only to encounter communication breakdowns with the channel—resulting in 502 errors and unresponsive bots.
While the necessity of a public messaging endpoint is outlined in the Bot Framework Security and Privacy Frequently Asked Questions – Bot Service | Microsoft Learn, I aim to share insights and practical considerations from my experience working with clients. Please reach out to Microsoft Support for more guidance.
Privatizing a bot solution involves more complexity than traditional web applications or APIs, where clients call the web application directly. In a bot solution, users do not interact directly with the bot/web app; instead, their requests are orchestrated and proxied through a channel connector. Additionally, bots can send messages asynchronously, facilitated by these channels. An example of network isolation in an Azure web app includes all the components that can be made available within a customer-managed network.
Bot as a Solution
Clients: User-facing applications used to consume/converse with bot solutions. Examples include the Web Chat widget, Teams, Slack, etc.
The Bot Service: This managed SaaS umbrella includes configuration management, channel services and token services. Services are made available with the <service>.botframework.com endpoints.
The Bot Application: Using the Bot SDK or Composer, you create an HTTP-based application that encapsulates your functional and conversational logic, including recognition, processing, and storage. The Bot application operates using the Bot Framework Activity Specification.
Channel Connectors: While Azure Bot Service offers two primary channels (Direct Line and Web Chat), it also allows extensibility for other clients/channels. Channel connectors are implemented by their respective owners and operate within their managed data centers. The messaging endpoint is not exposed to end users; instead, users connect through channel connectors that manage user sessions, activity orchestration, and authentication. Different clients, such as Teams and Slack, represent messages and activities uniquely. Since Bot SDK applications understand and respond with activities as defined in the Bot Framework Activity Specification, channels are responsible for transforming activities and forwarding them to the application.
References:
Basics of the Microsoft Bot Framework – Bot Service | Microsoft Learn
Channels reference – Bot Service | Microsoft Learn
Create a bot in Microsoft Teams – Teams | Microsoft Learn
Simplified view of a Direct Line bot (Web Chat: full-featured bundle)
Simplified view of a Teams bot solution
The Direct Line and Teams clients do not call your bot's endpoint directly; instead, their requests are proxied through the Direct Line service or the Teams channel connector. When you privatize your bot application/endpoint, there is a high potential for disrupting communication between the channel connector and the bot application. Since these channel connectors operate within managed data centers, requests from channels to your bot will traverse the public internet. This is why a public messaging endpoint is essential for most channels.
Options to secure a bot solution:
You can use a gateway to expose a public IP address/endpoint and internally proxy to the App Service: for example, Azure Application Gateway, Azure Firewall, or Azure Front Door as the upstream for the App Service. These options are not exhaustive; you should be able to use any firewall/gateway that exposes a public endpoint as the upstream for a private bot app.
Refer – Secure your Microsoft Teams channel bot and web app behind a firewall – Azure Architecture Center | Microsoft Learn
The bot's messaging endpoint will then be the public endpoint exposed by the gateway. The App Service and the Application Gateway, for example, can communicate privately within the VNet.
Note that you may need additional steps to configure an SSL certificate at your gateway.
If you want to use the App Service directly as your messaging endpoint, you can enable public access and add access restrictions that allow requests only from the intended channels.
For Direct Line, you can allow the "AzureBotService" service tag in your access restrictions; a sketch of adding this rule programmatically follows below.
Refer – Azure service tags overview | Microsoft Learn
Azure App Service access restrictions – Azure App Service | Microsoft Learn
For teams Bot you can whitelist IP used by Teams Servers – Secure your Microsoft Teams channel bot and web app behind a firewall – Azure Architecture Center | Microsoft Learn
For other channels, the respective channel connector IPs need to be allowed.
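As a rough sketch of adding such a rule programmatically with the azure-mgmt-web management SDK (the resource names in angle brackets are placeholders, and the exact model fields should be verified against the current SDK; the same rule can also be added in the portal or with the Azure CLI):

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import IpSecurityRestriction

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the current site configuration and append an allow rule for the
# AzureBotService service tag; names in angle brackets are placeholders.
config = client.web_apps.get_configuration("<resource-group>", "<bot-app-name>")
config.ip_security_restrictions = (config.ip_security_restrictions or []) + [
    IpSecurityRestriction(
        name="AllowAzureBotService",
        priority=100,
        action="Allow",
        tag="ServiceTag",              # treat ip_address as a service tag
        ip_address="AzureBotService",  # requests from Bot Service connectors
    )
]
client.web_apps.update_configuration("<resource-group>", "<bot-app-name>", config)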
Only if you are using the Direct Line channel can you make the communication completely private, using the Direct Line App Service extension.
Explanations:
About network isolation in Azure AI Bot Service – Bot Service | Microsoft Learn
Direct Line App Service extension – Bot Service | Microsoft Learn
Guides:
Private Endpoint & Direct Line App Service Extension Configuration with Bot Services and App Service – Microsoft Community Hub
Deploying Bot APIs to intranet and internal web applications – Microsoft Community Hub
Other Security FAQs:
Bot Framework Security and Privacy Frequently Asked Questions – Bot Service | Microsoft Learn
Considerations with the Direct Line App Service Extension (DL-ASE) | The fully isolated Direct Line bot:
In simple terms, the channel connector (Bot Service) is hosted inside the App Service using Azure Web Sites Extensions | Microsoft Azure Blog. This way, users connect directly to the App Service URL (instead of directline.botframework.com), which you can restrict with private endpoints. Note that users need access to the VNet (via ExpressRoute, VPN, etc.) where the App Service is deployed when you disable public access (Connect privately to an App Service apps using private endpoint – Azure App Service | Microsoft Learn).
This setup is only possible on a Windows App Service using the .NET or Node Bot SDK for the Direct Line client.
While this achieves full network isolation, your web app must handle WebSocket connections and execute the functional logic, unlike the public Direct Line, where WebSocket connections are managed by Azure Bot Service. (Pricing – Azure Bot Services | Microsoft Azure)
The Direct Line ASE client uses the streaming/WebSockets API; the HTTP REST APIs are not supported (API reference – Direct Line API 3.0 – Bot Service | Microsoft Learn).
DL-ASE does not support Direct Line enhanced authentication – Bot Service | Microsoft Learn.
Troubleshooting the IPC (named pipes) can become difficult, whereas on the public Direct Line the traffic is an HTTP POST between the channel and the web app, which is easily tracked.
Hope this helps!
Scalability in the Cloud: Migrating over 200 TB SAP Oracle Database to Azure
Overview
In this blog, we will cover the Azure solution and deployment approach for migrating very large Oracle databases (200 TB+) to Azure.
VM Solution
Azure virtual machines (VMs) offer optimal vCPU counts for managing Oracle licensing, a high RAM ratio to accommodate a large Oracle SGA, and the IO and network bandwidth to support transaction and batch workloads. We tested both the M192 and M176 SKUs with a 200 TB+ Oracle database.
In the comparison below, the M176, based on the Intel Sapphire Rapids processor with DDR5, offers higher SAPS and 1.5x faster memory access than the M192 (based on the Intel Cascade Lake processor). The M176 is also equipped with Azure Boost technology, improving both IO and network throughput. In our testing, we found the M176 offers higher SAPS, faster memory access, and more IO and network bandwidth.
| VM SKU | Intel Chipset | vCPU | Memory (GiB) | IOPS/MBps | Network Bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- |
| M192idms_v2 | Cascade Lake / DDR4 | 192 | 4096 | 80000/2000 | 30000 |
| M176ds_4_v3 | Sapphire Rapids / DDR5 | 176 | 3892 | 130000/4000 | 40000 |
System Global Area (SGA): Very large Oracle databases benefit greatly from a large SGA. Customers with such sizeable Oracle workloads should deploy an Azure M-series VM with at least 4 TB of RAM. Specific parameter recommendations follow, with a worked sizing example after the list:
Set Linux Huge Pages to 75-90% of Physical RAM size
Set System Global Area (SGA) to 90% of Huge Page size
Set the Oracle parameter USE_LARGE_PAGES = ONLY
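As a worked example of that sizing arithmetic (the 4 TiB RAM figure and the percentages come from the guidance above; the 2 MiB huge page size is the common x86_64 Linux default and is an assumption to verify on your system):

# Worked huge-page and SGA sizing for a VM with 4 TiB of RAM.
# Assumes 2 MiB huge pages; check with: grep Hugepagesize /proc/meminfo
ram_gib = 4096                 # physical RAM, e.g. M192idms_v2
hugepage_mib = 2               # typical default huge page size

hugepages_gib = ram_gib * 0.80            # within the 75-90% guidance
sga_gib = hugepages_gib * 0.90            # SGA at 90% of the huge page pool
nr_hugepages = int(hugepages_gib * 1024 / hugepage_mib)  # vm.nr_hugepages value

print(f"vm.nr_hugepages = {nr_hugepages}")  # 1677721
print(f"SGA target ~ {sga_gib:.0f} GiB")    # 2949 GiB, with USE_LARGE_PAGES=ONLY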
Storage Solution and Configuration
Azure has multiple storage options: Premium SSD, Premium SSDv2, Ultra and Azure NetApp Files (ANF). The chart below captures an overview of the storage characteristics for virtual machine Standard_M176ds_4_v3.
| IO Metrics | Premium SSD | Premium SSDv2 (Pv2) | Azure NetApp Files (ANF) |
| --- | --- | --- | --- |
| IOPS | 130K | 130K | Millions |
| Throughput | 4 GB/s | 4 GB/s | >5 GB/s |
| Latency | Low single digit (ms) | < 1 ms | < 0.4 ms |
| High Availability | Oracle Data Guard | Oracle Data Guard | Oracle Data Guard |
| Disaster Recovery | Oracle Data Guard | Oracle Data Guard | Oracle Data Guard and/or ANF Cross-Region Replication |
| Storage Snapshot | Yes | No | Yes |
| Storage Manager | Automatic Storage Management (ASM) | ASM | dNFS |
For a 200 TB+ Oracle database workload, we tested the following storage configuration, which leverages the network and IO channels from ANF and Premium SSDv2 (Pv2) respectively. Using both ANF and Pv2 helped optimize the available VM throughput to meet and exceed the IO requirements of such a large Oracle database.
| Component | Disk Type | Number of Volumes | Size (TiB) | Total Throughput (MiB/s) | Volume | Stripe Size |
| --- | --- | --- | --- | --- | --- | --- |
| Oracle Home | Pv2 | 1 | 1 | 250 | LVM | n/a |
| sapdata1-6 | ANF | 6 | 40 per volume | 3000-4000 | Individual | n/a |
| Oracle redo1-4 | ANF | 4 | 0.5 per volume | 500-2000 | Individual | n/a |
| Oracle FRA | ANF | 1 | 5 | 500-1000 | Individual | n/a |
| Oracle Archive | Pv2 | 4 | 10 | 1500 | LVM | 64 KB |
| Oracle Temp | Pv2 or Ephemeral | 4 | 10 | 1500 | LVM | 64 KB |
Storage Deployment Approach
Both NFSv3 and NFSv4.1 are supported with Oracle Direct NFS (dNFS); we ultimately went with the combination of NFSv3 and Oracle Direct NFS. NFSv3 has proven more reliable and robust, and is much less bug-sensitive with dNFS than the newer NFSv4.1.
Application volume group for Oracle (AVG for Oracle) deploys all volumes required to install and operate the Oracle databases at enterprise scale, with optimal performance and according to best practices in a single step with optimized workflow. AVG for Oracle shortens Oracle database deployment time and ensures volume performance and stability, including the use of multiple storage endpoints (multiple IPs).
Oracle Database with Azure NetApp Files – Azure Example Scenarios | Microsoft Learn
Understand Azure NetApp Files application volume group for Oracle | Microsoft Learn
The Oracle data files can be distributed across the sapdata volumes in a round-robin fashion to avoid IO pressure on any individual filesystem.
High Availability Architecture
Azure offers a high availability option by leveraging availability zones, with an SLA of 99.99%. Most Azure regions provide the VM SKUs and the low latency between zones needed to deploy an active-active HA setup across zones. However, not every zone has the required VM SKU, so it is important to check SKU availability by running SAP-on-Azure-Scripts-and-Utilities/Get-VM-by-Zones at main · Azure/SAP-on-Azure-Scripts-and-Utilities (github.com) from your subscription. You can identify low-latency zones by running SAP-on-Azure-Scripts-and-Utilities/AvZone-Latency-Test at main · Azure/SAP-on-Azure-Scripts-and-Utilities (github.com). Together, the SKU availability and latency scripts can guide you to zones that offer an active-active zone pair for HA deployment.
It is important to note that each subscription may be mapped to different physical zones. You can find the physical zone mapping using the Azure API Subscriptions – List Locations – REST API (Azure Resource Management) | Microsoft Learn.
Figure: high availability architecture.
Data Protection Strategy
Customers can leverage a combination of ANF snapshots on the primary VM and weekly Oracle streaming backups on the HA standby. We recommend the ANF snapshot tool provided by Microsoft, AzAcSnap (the Azure Application Consistent Snapshot tool). Both snapshots and cloning can be executed in minutes, regardless of database size. Cloned volumes can be leveraged for system copies, but it is critical that production and QA VMs be in the same physical zone to ensure low latency between them.
Technically, ANF does not prevent you from mounting NFS volumes across zones, so it is important that operational procedures are established to keep the zone and ANF storage on the same side.
Backup & Snapshot Approach
| Domain | Backup Component | Backup Option | Frequency | Run Against | Load on DB VM |
| --- | --- | --- | --- | --- | --- |
| Primary Region | DB | Snapshot (azacsnap) | Every 4 hours | HA primary VM | Low |
| Primary Region | DB | RMAN backup | Daily incremental, weekly full | HA standby VM | Low |
| Primary Region | Log | Archive log backup | Every 15 minutes | HA primary VM | Low |
| DR Region | DB | Oracle Data Guard | Continuous (kept current) | n/a | Low |
Database Restore
| Failure | Recovery Option | Recovery Time | Comment |
| --- | --- | --- | --- |
| DB Level | Snapshot + log roll-forward | Minutes | 1st option |
| DB Level | RMAN restore + log roll-forward | Hours | 2nd option |
| Region Wide | Oracle Data Guard | Minutes | 1st option |
| Region Wide | RMAN restore + log roll-forward | Hours | 2nd option |
Migration Approach
Depending on the on-premises hardware, OS/DB, and SAP software levels, a migration falls into either the homogeneous or the heterogeneous migration category.
We will cover the heterogeneous migration approach in a separate blog and discuss how to reduce downtime and improve the benefits for very large databases.
In the homogeneous migration approach, smaller databases can be migrated using backup and restore. Larger databases can be migrated by setting up Oracle Data Guard (ODG) replication.
Customers should run the Azure Quality Check against the deployed solution to identify and address any deviations from Azure best practices.
Testing Approaches
Customers have leveraged the Oracle Real Application Testing (RAT) option to perform real-world testing of the Oracle database. Capturing production workloads during the peak period and replaying them on Azure can help identify the required VM SKU and storage solution. Customers have used Azure monitoring dashboards and RAT-generated outputs to analyze the test results and move forward confidently with migrating SAP on Oracle systems to Azure.
The RAT test covers Oracle database performance requirements. It is highly recommended to also run SAP-level volume and performance testing to ensure that end-to-end SAP processing meets and exceeds performance KPIs.
System Performance
Azure innovations such as Mv3 (Intel Sapphire Rapids/DDR5), Azure Boost for improved IO and network throughput, and ANF storage with sub-millisecond latency via dNFS, combined with Oracle Advanced Compression, have resulted in a 30-50% SAP processing improvement on Azure.
Conclusion
Azure has led SAP solutions over the years and reached new heights every year, bringing advanced VM SKUs, storage and network solutions, and end-to-end architecture and deployment approaches to successfully deploy the largest SAP on Oracle databases. Azure successfully hosts a 200 TB+ SAP on Oracle database!
Useful Links
Below are key SAP Notes and Microsoft documentation for a successful Azure migration
2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions – SAP for Me
1928533 – SAP Applications on Microsoft Azure: Supported Products and Azure VM types – SAP for Me
Oracle Azure Virtual Machines database deployment for SAP workload | Microsoft Learn
General performance considerations for Azure NetApp Files | Microsoft Learn
Understand Azure NetApp Files application volume group for Oracle | Microsoft Learn
SAP-on-Azure-Scripts-and-Utilities/QualityCheck/Readme.md at main · Azure/SAP-on-Azure-Scripts-and-Utilities · GitHub
1672954 – Oracle 11g, 12c, 18c and 19c: Usage of hugepages on Linux – SAP for Me
Microsoft MVPs – Celebrating 10 Years
As we commemorate over 30 years of the Microsoft Most Valuable Professional (MVP) Program, we want to sincerely acknowledge the efforts of those who have achieved award milestones of 10, 15, 20 years, and more. This journey wouldn't be possible without your support and dedication to community leadership – thank you!
This blog features a few MVPs achieving their 10-year milestone. Read on to find out what these MVPs have to say about their experience and time in the program.
What has motivated you to remain committed to the MVP program for the past 10 years?
Prajwal Desai Security/Windows and Devices MVP (India): “Becoming an MVP was a dream come true for me because helping others is my passion. Every year, I work hard to maintain this status, and being part of the Microsoft community continues to inspire and motivate me. It pushes me to keep learning, publishing, and embracing new challenges. I’m eager to remain in this role, as the MVP program provides me with incredible knowledge and the opportunity to learn from other professionals. It’s a truly rewarding experience!”
Samantha Villarreal Torres M365 MVP (Mexico): “The affection for people, knowing that I can contribute to their professional growth in some way and that this benefits each of them and therefore their families.”
Stefan Malter M365 MVP (Germany): “As an author, media trainer and Microsoft MVP, I appreciate the exchange with the product groups. They hear me when it comes to the perspective and needs of teachers and lecturers. Technical developments like AI and cloud computing are a big challenge for the education sector worldwide. Being part of the MVP program also gives me important insights that allow me to translate the digital progress for my less tech-savvy target group.”
Josh Garverick Microsoft Azure/Developer Technology MVP (US): “There are two things that continue to draw me into the community, being able to see my impact on those who I talk with and the ability to help shape the many different products that Microsoft has through conversations with the Product Groups. I love talking with people about all sorts of technologies and it makes me feel great if I can help them out in their understanding of a service or product!”
Cathrine Wilhelmsen Data Platform MVP (Norway): “First and foremost: the community. Becoming an MVP, driven by a passion to help and contribute. Over time, I’ve gained incredible friendships and the joy of sharing and discussing new technology with talented, like-minded peers. This community fuels my enthusiasm while the MVP title has opened doors, added value to my work, and strengthened client trust.”
Chris Gomez Developer Technology MVP (US): “My goals align well with the MVP program, focusing on educational outreach and technical content. I aim to help developers solve problems through videos, presentations, and articles, often inspired by challenges I’ve faced. I was mentored by great teachers and senior developers, and I feel a responsibility to pass that knowledge forward, especially to underrepresented communities. After ten years as an MVP, I’m honored and grateful to continue helping others save time and build on what I’ve learned.”
Yolanda Cuesta M365 MVP (Spain): “I enjoy sharing knowledge a lot, and I have also found a lot of members in the community from whom I learnt a lot. I like technology so much and love to use it in different fields and ways. As an MVP I have access to the latest Microsoft modern work tools and can experiment with them, sometimes even before they are available to the public.”
What impact has the MVP community had on your personal and professional growth?
Prajwal Desai Security/Windows and Devices MVP (India): “Being an MVP and being part of the Microsoft community, my knowledge and skills have been greatly enhanced. The MVP community is a great source of learning and inspiration for me. Each MVP possesses distinct abilities, and collectively, we exchange a wealth of knowledge and intelligence. Personally, the MVP community inspires me to learn and contribute, and professionally, I have noticed that my knowledge at work and the way I handle things have improved.”
Samantha Villarreal Torres M365 MVP (Mexico): “Being a Microsoft MVP has been a pivotal force in shaping my career and life. As an MVP, I gained unique access to Microsoft's latest technologies, opening doors to professional growth and influential connections. It enabled me to speak at major events like the Microsoft Innovation Tour and the AI Tour 2024, sharing insights on Office365 and Copilot with global audiences. Leading the “Mujeres TICS Latam” community, I empowered professionals across 23 countries with certifications and training, enriching my network and impact. Being an MVP not only elevated my career but also strengthened my role as a tech community leader and educator.”
Stefan Malter M365 MVP (Germany): “The MVP program has changed my life in so many ways. Getting to other dedicated enthusiasts has led to many helpful conversations and special connections over the years. But I also see the MVP award as a confirmation for my community work. It has become a yearly boost for my self-confidence, and the associated benefits allow me to discover new technical possibilities that otherwise would not be part of my learning content.”
Josh Garverick Microsoft Azure/Developer Technology MVP (US): “It’s difficult to verbalize, honestly. When I first joined the MVP program, I was very reserved and just seeing folks like Scott Hanselman and Scott Guthrie blew my mind. I’ve since become much more comfortable with myself as well as more outgoing to other members in the community. Professionally it helped me learn a ton and apply what I’ve learned, advancing myself to where I am today. Personally, I’m more capable of speaking to groups of people and have a better level of self-confidence.”
Cathrine Wilhelmsen Data Platform MVP (Norway): “Being involved in the community and becoming an MVP truly changed my life. Ten years ago, I was a shy introvert with social anxiety, but now I thrive on stage, enjoy organizing large events, and embrace social interactions (though I still need alone time to recharge!). The community helped me find “my people,” allowing me to break out of my shell and become confident, both personally and professionally.”
Chris Gomez Developer Technology MVP (US): “Fellow MVPs are a tremendous source of knowledge, both through their online content and local involvement. Early in my journey, I learned from their articles, talks, and now through videos, projects, and courses across a wide range of technologies. Locally, I was inspired by the MVPs in the Philly.NET community, which welcomed me and gave me the platform for my first technical talk. This community, led by MVPs, became the foundation for my career growth, helping me evolve from a developer to a software architect.”
Yolanda Cuesta M365 MVP (Spain): “I enjoy sharing knowledge a lot, and I have found a lot of members in the community from whom I learnt a lot. I like technology so much and love to use it in different fields and ways. As an MVP I have access to the latest Microsoft modern work tools and can experiment with them, sometimes even before they are available to the public.”
What advice would you give to new MVPs just starting their journey?
Prajwal Desai Security/Windows and Devices MVP (India): “I would tell all the new MVPs that technology is developing far more quickly than it did in the past. In addition to improving your abilities with certifications, you also need to keep up with these advancements. I always recommend subscribing to Microsoft blogs, MVP newsletters, following your MVP gurus on social platforms, and attending free training provided by Microsoft. This will enhance the knowledge and skills that are required for career progression. Lastly, I would say to all the new MVP’s, You’re awesome and you are in the perfect community. Keep learning, stay motivated and help others, the rewards will follow you.”
Samantha Villarreal Torres M365 MVP (Mexico):“ Being an MVP is a gift for the work we already do, take advantage of it wisely, keep sharing your knowledge and skills, many doors will open on your way, create your own opportunities do not wait for them to come, stay active, be clear about your goals and your mind will work on them automatically, you will achieve it, you just have to be consistent and enjoy the journey, Celebrate your achievements with those who value them, improve your skills and mark the path to follow for many generations to come.”
Stefan Malter M365 MVP (Germany): “At first, I found it hard to understand how extensive and valuable the MVP program really is. Take your time to discover all the possibilities, but do not feel obliged to embrace everything at once. There is also no need to be shy when it comes to contacting the product groups or visiting one of the Microsoft events. All people are really open-minded and nice and – as we say in Germany – also only cook with water.”
Josh Garverick Microsoft Azure/Developer Technology MVP (US): “Ask LOTS of questions, don’t be afraid to reach out to other MVPs either in person or via LinkedIn, and most of all, don’t be afraid to say hi to people you recognize from conference talks, online videos, or social media!”
Cathrine Wilhelmsen Data Platform MVP (Norway): “Don’t burn yourself out. You became an MVP because you love what you’re doing, but you don’t always have to do more, bigger, better, faster. Keep doing what you love, stay authentic, and strive for a balance in life. With that in mind, enjoy the ride, keep doing what you love, help as many people as you can, and have fun!”
Chris Gomez Developer Technology MVP (US): “As a new MVP, stay true to yourself and take advantage of the great opportunities available, including engaging with product teams. While imposter syndrome is real, your feedback is valuable and represents many voices. Don’t hesitate to explore other tech areas that interest you, and use the resources available, like distribution lists and video libraries. Always pay attention to NDAs when sharing information and ask if you’re unsure. Most importantly, enjoy the journey and maintain a healthy work-life balance—you’re recognized for your passion to help others.”
Yolanda Cuesta M365 MVP (Spain): “I’d like to motivate them to be consciously active and keep an open-minded approach with regard to new jobs, experiences, and connection opportunities. As an example, in collaboration with other female technical experts, I am a founding member of W4TT (Women-For-Technical-Talks), a community that offers support and mentorship to women who aim to become public speakers at technical events. The idea for this initiative came out of conversations with MVPs I connected with.”
How do you balance your MVP activities with your professional & personal commitments?
Prajwal Desai Security/Windows and Devices MVP (India): “Everyone goes through ups and downs, and balancing personal life with professional commitments can be difficult. In the past 10 years, I have learnt how to prioritize things so that I can balance my MVP responsibilities with professional commitments. For example, every day I spend at least 3–4 hours on personal learning after my job shift. In another instance, I would mark a meeting for the coming Monday on my calendar and make sure no other business took priority during that time. At times during office hours when I couldn’t attend an MVP meeting, I made sure to watch the recording after my work hours.”
Samantha Villarreal Torres M365 MVP (Mexico): “Balancing a remote career and personal life requires dedication, responsibility, and adapting to changing circumstances. Establishing realistic goals, work schedules, and prioritizing mental and physical health are essential. Disconnecting from technology periodically helps maintain creativity and reduce stress. Family is a major motivator, as setting a positive example for the next generation drives the pursuit of balance and purpose in life.”
Stefan Malter M365 MVP (Germany): “I am always aware of my versatile role as a community leader and have found my way to balance all interests. I can be an independent author and media trainer with critical views on technological developments and media competency. At the same time, I can appreciate the constructive exchange with professionals at Microsoft and discuss the challenges we all face in this fast-changing world. This is how this program – to me – has become THE Most Valuable Puzzle.”
Josh Garverick Microsoft Azure/Developer Technology MVP (US): “It’s certainly not the easiest thing to do, but I am fortunate to have such a great employer who allows me to work with folks at Microsoft and GitHub. I am encouraged to submit talks to and attend conferences, and community accomplishments are celebrated amongst our colleagues. I also guard my free time, making sure I prioritize family events and other non-technical activities. Sometimes I just need to hack on something for a couple of hours to scratch that itch, though.”
Cathrine Wilhelmsen Data Platform MVP (Norway): “I’m lucky to have an employer who values my role as an MVP and supports my contributions. It’s about finding the balance between what I love doing and what benefits my company and clients. Whether it’s sharing project insights through blog posts or bringing back valuable takeaways from events, I look for win-win opportunities where my efforts can help others while aligning with my work.”
Chris Gomez Developer Technology MVP (US): “Balancing professional commitments and community involvement requires respect for both. While it’s exciting to work on personal projects for the community, maintaining a healthy work/MVP/life balance is essential. Just as you protect confidential work information, it’s important to keep MVP program content and job responsibilities separate. What you learn over time can benefit both your career and the community, but it’s crucial to maintain clarity between the two to respect both your employer and the MVP program.”
Yolanda Cuesta M365 MVP (Spain): “This can be difficult because my MVP activities are usually done in my spare time, so I think it’s more a hobby I am passionate about than an obligation.”
Thank you MVPs!
Thanks to everyone who shared their experiences, and congratulations once more on reaching this 10-year milestone. If you are interested in becoming a Microsoft MVP, please visit our website to learn more.
SharePoint conditional formatting not applying to item view
Hello,
I require some help with a SharePoint list that I have set up to track issues. I have a column with a priority label (Low, Medium, or High) based on a number value.
I have set up conditional formatting in the list view (see screenshot below; a representative rule is also sketched after this post):
However, this conditional formatting does not show in the item view (see screenshot below):
Is anyone able to tell me why, and how do I correct this?
Many thanks
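For reference, conditional formatting on a number-backed priority column is defined as JSON against SharePoint’s published column-formatting schema. The snippet below is a minimal sketch, built and printed in Python, of what such a rule can look like; the thresholds and colours are hypothetical placeholders rather than the poster’s actual configuration:

```python
import json

# A representative conditional-formatting rule for a number-backed priority
# column: green for Low, amber for Medium, red for High. The cut-offs (>= 4,
# >= 7) and colours are placeholders; substitute the list's real values.
priority_format = {
    "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json",
    "elmType": "div",
    "txtContent": "@currentField",
    "style": {
        "background-color": (
            "=if(@currentField >= 7, '#d13438', "
            "if(@currentField >= 4, '#ca5010', '#107c10'))"
        ),
        "color": "#ffffff",
        "padding": "4px",
    },
}

# Paste the printed JSON into the column's 'Format this column' advanced editor.
print(json.dumps(priority_format, indent=2))
```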
suggestion: link a ToDo with a calendar event
Hi,
I wondered if this was an option; if not, I wanted to suggest that it be added.
So I have an event that will have many instances in my calendar: an appointment with a specific health specialist I see at least three times a week. The time changes and everything.
I have an associated ToDo that I repeat each time I have this event. I would like to be able to add, to each of those appointments in the calendar, a link to the ToDo so I can access it quickly. Is there a way to do it? If not, is it something that could be implemented? (A possible workaround is sketched after this post.)
Thanks in advance!!
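One possible workaround, sketched below against the Microsoft Graph calendar API, is to patch a link into the body of each occurrence of the recurring appointment. The access token, series-master event ID, and task URL are all hypothetical placeholders (the task URL would be copied manually from the To Do app):

```python
import requests

TOKEN = "<access token with Calendars.ReadWrite>"    # hypothetical placeholder
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

SERIES_ID = "<series master event id>"         # the recurring appointment
TODO_LINK = "<URL copied from the To Do app>"  # hypothetical task link

# Expand the recurring series into its individual occurrences for a date window.
resp = requests.get(
    f"{GRAPH}/me/events/{SERIES_ID}/instances",
    params={
        "startDateTime": "2024-11-01T00:00:00",
        "endDateTime": "2024-12-01T00:00:00",
    },
    headers=HEADERS,
)
resp.raise_for_status()

# Add the link to each occurrence. Note: PATCHing the body replaces the
# existing content, so a real script would fetch and merge it first.
for occurrence in resp.json()["value"]:
    patch = {
        "body": {
            "contentType": "html",
            "content": f'<p>Linked task: <a href="{TODO_LINK}">open in ToDo</a></p>',
        }
    }
    requests.patch(
        f"{GRAPH}/me/events/{occurrence['id']}", json=patch, headers=HEADERS
    ).raise_for_status()
```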
Old bug when switching between calendar and mail
Simple bug that has been happening for years and is still not solved.
In my Outlook I often switch between calendar and email. Sometimes, after messing with the calendar by toggling views, my Outlook email pane switches view from Compact to Preview. Why?
This has been occurring for at least 6 years, with other email accounts and computers, and for colleagues as well.
Restricted Planner View based on Filters
Hi everyone,
I have a main Planner plan where all tasks are visible. However, I don’t want all employees to see every task—each employee should only see tasks with a specific label assigned to them. My goal is to create a filtered version of the main plan that only shows tasks with certain labels, allowing employees to view their tasks in a separate, filtered plan.
This filtered plan should serve solely as a status overview, meaning no changes are made here; it simply reflects the main plan. If a task is moved between buckets in the main plan, I want this change to be mirrored automatically in the filtered plan.
Has anyone else managed to sync Planner tasks between plans or buckets? Any tips on setting up a more efficient flow, or suggestions for workarounds, would be really appreciated (one possible approach is sketched after this post). Thank you!
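Planner has no built-in filtered mirror, so workarounds generally script one. Below is a minimal one-way sketch against the Microsoft Graph Planner API, assuming hypothetical plan IDs and an access token with suitable permissions; it copies tasks carrying a chosen label (Planner labels surface in Graph as category1 through category25) from the main plan into the filtered plan. A real sync would run on a schedule and PATCH previously copied tasks, using If-Match ETag headers, when they move between buckets.

```python
import requests

TOKEN = "<access token with Planner permissions>"  # hypothetical placeholder
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

SOURCE_PLAN = "<main plan id>"      # hypothetical
TARGET_PLAN = "<filtered plan id>"  # hypothetical status-overview mirror
LABEL = "category1"                 # the label employees are filtered by

# Read every task in the main plan.
resp = requests.get(f"{GRAPH}/planner/plans/{SOURCE_PLAN}/tasks", headers=HEADERS)
resp.raise_for_status()

# Re-create tasks that carry the chosen label in the filtered plan.
for task in resp.json()["value"]:
    if task.get("appliedCategories", {}).get(LABEL):
        new_task = {
            "planId": TARGET_PLAN,
            "title": task["title"],
            "percentComplete": task.get("percentComplete", 0),
        }
        requests.post(
            f"{GRAPH}/planner/tasks", json=new_task, headers=HEADERS
        ).raise_for_status()
```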
Cannot switch email from ToDo?
Hi,
I have my work email and my personal email in Outlook (on a Windows computer). When in ToDo, I cannot switch email accounts; I have to go into the calendar and select only the calendar I want in order to pick the ToDo items I want. On the phone, I can switch which email account ToDo uses.
In Outlook on Windows, I couldn’t find this option even after a long search. Does it exist? I have to go through the calendar every time.
suggestion: allow steps to have more details
I wanted to suggest adding more description to each step, or even better, letting steps contain sub-steps. Similar to an outline where you can add infinitely nested subpoints, this would be useful in the same way that having many levels in an outline is.
Or is there already a way to do this? I’m still new to this.
Does Microsoft have a way of searching an organization’s code for Windows Enterprise organizations?
Has there ever been a Microsoft product that allows one to search by code syntax (in an internal cloud) across files with certain specific extensions, like GitLab or Bitbucket can do?
At my organization, SharePoint and even Azure’s Microsoft Graph are moving toward “Content” and AI-based search. During this transition, I have found it harder and harder to find certain files and emails in my organization based on the symbols a document might contain.
These are examples of symbols that are really important to my business:
%m+%   # perhaps the most important symbol to accounting and finance
%w+%   # the biweekly version of the same symbol
%>%
|>
It’s come to the point that I keep a local OneDrive copy where thousands of files of 1000 KB or less (1.2 GB of storage space in total) are saved, just so I can use more simplistic search methods over them. But I feel like it’s such a waste… if only there were a better method.
So I was curious whether anyone at Microsoft has ever come up with a cloud-based solution to the code-search and exact-search problem, which has become worse as a result of the reliance on OpenAI. (A sketch of the interim local-search approach follows.)
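As for the interim local-copy approach, the “simplistic search” can be as small as a literal substring scan. Symbols like %m+% and |> contain regex metacharacters, so plain substring tests sidestep escaping entirely; the folder path below is a hypothetical placeholder:

```python
import os

# Literal code symbols to search for. Regex engines treat %, +, and | as
# metacharacters, so a plain substring test avoids escaping headaches.
SYMBOLS = ["%m+%", "%w+%", "%>%", "|>"]
ROOT = os.path.expanduser("~/OneDrive/code-snapshots")  # hypothetical local copy

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if any(sym in line for sym in SYMBOLS):
                        print(f"{path}:{lineno}: {line.strip()}")
        except OSError:
            continue  # skip unreadable files
```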