Month: February 2024
Partner Blog | Empowering Partners: Celebrating Excellence in Tech
Authored by Leona Locke, Director, GTM Benefits and Partner Engagement, with contributions from Regina Johnson (RJohnson_Microsoft), Senior Manager and Community Lead.
Our partner community is enriched by its diversity, and we are committed to strengthening our collective capacity. In this blog, we will share a few opportunities for partners to access information, resources, and capital that are designed not only to empower you to achieve your goals, but also to connect you with collaborators who have shared visions and aspirations.
Partner-led associations at Microsoft
Partner-led Associations are nonprofit organizations led by Microsoft partners and technology company business owners. They boast strong membership bases and provide a direct pipeline to channels driving partner engagement, professional skilling, P2P opportunities, and enablement to increase partner knowledge, growth, and sales.
Drive growth for your organization by engaging individually and collectively with partner-led associations such as the Black Channel Partner Alliance (BCPA), the International Association of Microsoft Channel Partners (IAMCP), the Women in Tech Network (WIT), and Women in Cloud (WIC).
Continue reading here
**Don’t forget to join our Partner-led communities to stay connected!**
Inclusive Growth Discussion Board
Microsoft Tech Community – Latest Blogs –Read More
Make demo typing easy with DemoType in ZoomIt v8.0
DemoType is a ZoomIt feature that lets you synthesize keystrokes from a script. Queue up code blocks or Copilot prompts and send them to target windows during a live demo. Additionally, ZoomIt counteracts editor-specific auto-formatting, allowing a script to be used interchangeably across target windows. Watch a video overview here.
Standard mode
By default, pressing the DemoType hotkey (e.g. Ctrl + 7) immediately begins injecting keystrokes into the target window. No user input is required. ZoomIt will simply run to the end of the current text segment and exit, returning control to the user.
User-driven mode
You can choose to drive the input with your own typing; toggle this behavior in the ZoomIt options dialog. Each user key press triggers one output character, a 1:1 injection ratio.
You can adjust the injection ratio between 1:1, 1:2, and 1:3 with the speed slider in the ZoomIt options dialog. Upon reaching the end of the current text segment in user-driven mode, DemoType will continue blocking keyboard input until you press the space bar.
Input Script
Your script can be sourced from a file or from the clipboard. To use the clipboard, you must put the control keyword [start] at the beginning of your selection. This deliberate safety prefix is meant to stop you from unintentionally presenting sensitive data in the clipboard.
To use a file, select it from the ZoomIt options dialog. If you were previously sourcing input from the clipboard and would like to switch to file, set the clipboard to some text that doesn’t include the [start] prefix, or clear the clipboard via Windows Settings > System > Clipboard > Clear clipboard data.
The [end] control keyword is used to split your script into text segments. It is important to note that DemoType will look to the left and right of an [end] and absorb a single newline from each side if present. This allows you to format your script and pad an [end] with newlines that won’t render.
Cancelling DemoType
To cancel an active session, press Escape. DemoType will also quit if focus changes to a different window. Terminating a DemoType session mid-segment hops to the next text segment. To hop back to the previous text segment, press the DemoType hotkey with Shift added (e.g. Ctrl + Shift + 7).
Control Keywords
Use the following keywords throughout your script to control behavior.
[start] is a safety prefix only used when tagging clipboard data as a viable script
[end] is a delimiter to segment your script into snippets
[enter], [up], [down], [left], [right] synthesize keystrokes
[pause:n] synthesizes a pause of n seconds
[paste] with a closing [/paste] allows you to inject a chunk of text via the clipboard
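Putting these keywords together, a clipboard-sourced script might look like the following sketch (the commands being "typed" are illustrative, not part of ZoomIt):

```text
[start]Write-Host "Hello from my live demo"[pause:2][enter]
[end]
[paste]
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
[/paste][enter]
```

Invoking the DemoType hotkey types the first segment; invoking it again continues with the pasted block, which is injected instantly via the clipboard rather than typed character by character.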
Microsoft Copilot for Sales is here!
Microsoft Copilot for Sales is here! The next step in the evolution of Viva Sales and Sales Copilot was released on February 1, 2024 for Dynamics 365 Sales and Salesforce CRM. You can read the announcement about the general availability of Copilot for Sales (and Copilot for Service) and learn what’s new.
With Copilot for Sales, we bring together the power of Copilot for Microsoft 365 with role-specific insights and actions to streamline business processes, automate repetitive tasks, and unlock productivity. We still provide the flexibility to integrate with Microsoft Dynamics 365 Sales and Salesforce to get more done with less effort.
Check out the Copilot for Sales Adoption Center where we provide resources to deploy, use, and scale Copilot for you, your team, and your organization!
Get started
Ready to join us and other top-performing sales organizations worldwide? Reach out to your Microsoft sales team or visit our product web page.
Ready to install? Have a look at our deployment guides for Dynamics 365 Sales users or for Salesforce users.
Stay connected
Keep up to date on the latest improvements at https://aka.ms/salescopilotupdates and learn what we’re planning next. Join our community in the discussion forum; we always welcome your feedback and ideas in our product feedback portal.
Sync Up Episode 08: From Waterfalls to Weekly Releases with Steven Bailey & John Selbie
Ever wanted to learn more about what goes into making OneDrive? Then this is the podcast for you! Join Stephen Rice and Arvind Mishra as they talk with CVP for OneDrive Engineering, Steven Bailey, and Engineering Manager John Selbie! This month, we’re talking about how OneDrive became the product that it is today, how engineering itself has evolved at Microsoft, and how the OneDrive engineering team strives to deliver a product that surpasses your expectations!
Announcing OpenAI text-to-speech voices on Azure OpenAI Service and Azure AI Speech
At OpenAI DevDay on November 6, 2023, OpenAI announced a new text-to-speech (TTS) model that offers 6 preset voices to choose from, in their standard format as well as their respective high-definition (HD) equivalents. Today, we are excited to announce that we are bringing those models in preview to Azure. Developers can now access OpenAI’s TTS voices through Azure OpenAI and Azure AI Speech services. Each of the 6 voices has its own personality and style. The standard voice models are optimized for real-time use cases, and the HD equivalents are optimized for quality.
These new TTS voices augment capabilities, such as building custom voices and avatars, already available in Azure AI and allow customers to build entirely new experiences across customer support, training videos, live-streaming and more.
This capability allows developers to give human-like voices to chatbots, narrate audiobooks and articles, translate across multiple languages, create content for games, and offer much-needed assistance to the visually impaired.
Click here to see these voices in action:
The new voices will support a wide range of languages from Afrikaans to Welsh, and the service can cater to diverse linguistic needs. For a complete list of supported languages, please follow this link.
The table below shows the 6 preset voices:
| Voice | Sample text | Sample language(s) | Sample audio (standard) | Sample audio (HD) |
| --- | --- | --- | --- | --- |
| Alloy | The world is full of beauty when your heart is full of love.<br>Le monde est plein de beauté quand votre cœur est plein d’amour. | English – French | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_alloy.wav | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_alloyHD.wav |
| Echo | Well, John, that’s very kind of you to invite me to your quarters. I’m flattered that you want to spend more time with me.<br>Des efforts de collaboration entre les pays sont nécessaires pour lutter contre le changement climatique, protéger les océans et préserver les écosystèmes fragiles. | English – French | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_echo.wav | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_echoHD.wav |
| Fable | Success is not the key to happiness, but happiness is the key to success.<br>Erfolg ist nicht der Schlüssel zum Glück, aber Glück ist der Schlüssel zum Erfolg. | English – German | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_fable.wav | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_fableHD.wav |
| Onyx | Conserving water resources through efficient usage and implementing responsible water management practices is crucial, especially in regions prone to drought and water scarcity.<br>Die Einführung nachhaltiger Praktiken in unserem täglichen Leben, wie z. B. die Einsparung von Wasser und Energie, die Auswahl umweltfreundlicher Produkte und die Reduzierung unseres CO2-Fußabdrucks, kann erhebliche positive Auswirkungen auf die Umwelt haben. | English – German | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_onyx.wav | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_onyxHD.wav |
| Nova | Success is not the key to happiness, but happiness is the key to success.<br>El éxito no es la clave de la felicidad, pero la felicidad es la clave del éxito. | English – Spanish | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_nova.wav | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_novaHD.wav |
| Shimmer | In this moment, I realized that amid the chaos of life, tranquility and peace can always be found.<br>人生は、一度きりのチャンスです。失敗しても、次に向けて立ち上がりましょう。 | English – Japanese | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_Shimmer.wav | https://nerualttswaves.blob.core.windows.net/oai-samples/test_oai_ShimmerHD.wav |
In addition to making these voices available in Azure OpenAI Service, customers will also find them in Azure AI Speech with added support for Speech Synthesis Markup Language (SSML) via the SDK.
Getting started
With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers.
For more information:
Try the demo from AI Studio
See our documentation (AI Speech Learn doc link)
Check out our sample code
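As a minimal sketch (not the official sample), synthesizing speech with one of the six preset voices through Azure OpenAI might look like this; the endpoint, API key, API version, and deployment name are all placeholder assumptions:

```python
# The six preset OpenAI TTS voices announced in this post.
PRESET_VOICES = ("alloy", "echo", "fable", "onyx", "nova", "shimmer")

def synthesize(text: str, voice: str = "alloy") -> bytes:
    """Return audio bytes for `text` spoken by the chosen preset voice."""
    if voice not in PRESET_VOICES:
        raise ValueError(f"unknown voice: {voice}")
    # Deferred import so the sketch can be read without the package installed.
    from openai import AzureOpenAI  # pip install openai
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        api_key="<your-key>",                                       # placeholder
        api_version="2024-02-15-preview",                           # example version
    )
    response = client.audio.speech.create(
        model="<your-tts-deployment>",  # deployment name of the TTS model
        voice=voice,
        input=text,
    )
    return response.read()  # raw audio bytes, ready to write to a file
```

The returned bytes can be written straight to an audio file for playback.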
IPv6 Transition Technology Survey
The journey toward IPv6-only networking is challenging, and today there are several different approaches to the transition from IPv4 to IPv6, with multiple dual-stack or tunneling stages along the way. As we prioritize future Windows work, we would like to know more about what customers like you are using to support your own IPv6 deployments. We have published the survey below to ask you a few questions that will contribute to that exercise.
The survey is fairly short and anonymous (though we left a field for sharing your contact information if you would be ok with direct follow up). Thank you in advance for your responses; your experiences will help us focus on what you find most valuable in our future work.
To BE or not to be – case sensitive using Power BI and ADX/RTA in Fabric
To BE or not to be – case sensitive
Power BI, Power Query and ADX/RTA in Fabric
Summary
You use this combination: Power BI, including Power Query, with data coming from Kusto/ADX/RTA in Fabric.
Is your solution case sensitive? Is “PoKer” == “poker”?
It depends on who you ask:
Power Query says definitely no, Power BI says definitely yes, and Kusto says that
“PoKer” == “poker” is false but “PoKer” =~ “poker” is true.
What about your data? Is the same piece of information always written the same way, or is it sometimes “Canada” and other times “canada”?
In this article I’ll highlight the challenges of using mixed case data and navigating the differences between the different technologies.
Power BI
Chris Webb in his blog writes:
Case sensitivity is one of the more confusing aspects of Power BI: while the Power Query engine is case sensitive, the main Power BI engine (that means datasets, relationships, DAX etc.) is case insensitive
In this post, Chris mentioned a way to do case-insensitive comparisons in Power Query, but it is not supported in DirectQuery.
Kusto/ADX/RTA
Kusto is case sensitive. Every function and language term must be written in the correct case, which is usually all lowercase.
The same applies to table, function, and column names.
What about text comparisons?
The KQL language offers case sensitive and case insensitive comparisons:
== vs. =~, != vs. !~ , has_cs vs. has , in vs. in~, contains_cs vs. contains
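As a minimal illustration of the difference (a sketch you can run in any Kusto query window):

```kusto
print sensitive   = ("PoKer" == "poker"),  // false: case-sensitive equality
      insensitive = ("PoKer" =~ "poker")   // true: case-insensitive equality
```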
Comparisons created in Power BI or in Power Query and folded to KQL
By default, the connector uses case-sensitive comparisons: has_cs and ==.
You can change this behavior by using settings in the connection.
Mixed case data
This is the trickiest topic. I attach a PBI report that shows a list of colors in two pages.
The slicer on the first page is showing the color Blue twice. If you edit the query, you can see that there are some products that have the color as “blue” all lower case.
This is confusing PBI and it shows two different variations that look exactly the same.
If you filter on either version in the first page, you get the same value which is the value of the version “Blue”.
For the second page I created a copy of the query where the versions of Blue are well separated, and you can see the total for each one. You can see that the total shown on the first page is just for the version “Blue”.
What can you do in such cases (pun intended)?
I created a third version of the query where I converted all color names to proper case.
The M function for proper casing couldn’t be used in DirectQuery, so I added a KQL snippet using the M function Value.NativeQuery.
The snippet is
| extend ColorName = strcat(toupper(substring(ColorName,0,1)),substring(ColorName,1))
On the third page you can see that filtering on “Blue” shows the total values for “Upper blue” and “Lower blue” as they appear on the second page.
So, if you have a column in mixed case, you must convert all values to a standard case.
GenAI Solutions: Elevating Production Apps Performance Through Latency Optimization
As the influence of GenAI-based applications continues to expand, the critical need to enhance their performance becomes ever more apparent. In the realm of production applications, responses are expected within a range of milliseconds to seconds. The integration of Large Language Models (LLMs) has the potential to extend response times of such applications by a few more seconds. This blog explores diverse strategies aimed at optimizing response times in applications that harness Large Language Models on the Azure platform. Broadly, the following methodologies can be employed to optimize the responsiveness of Generative Artificial Intelligence (GenAI) applications:
Response Optimization of LLM models
Designing an Efficient Workflow orchestration
Improving Latency in Ancillary AI Services
Response Optimization of LLM models
The inherent complexity and size of Language Model (LLM) architectures contribute substantially to the latency observed in any application upon their integration. Therefore, prioritizing the optimization of LLM responsiveness becomes imperative. Let’s now explore various strategies aimed at enhancing the responsiveness of LLM applications, placing particular emphasis on the optimization of the Large Language Model itself.
Key factors influencing the latency of LLMs are:
Prompt Size and Output token count
A token is a unit of text that the model processes. It can be as short as one character or as long as one word, depending on the model’s architecture. For example, in the sentence “ChatGPT is amazing,” there are five tokens: ["Chat", "G", "PT", " is", " amazing"]. Each word or sub-word is considered a token, and the model analyses and generates text based on these units. A helpful rule of thumb is that one token corresponds to ~4 characters of text for common English text. This translates to ¾ of a word (so 100 tokens ~= 75 words).
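The rule of thumb above can be sketched as a rough estimator; this is an approximation only, not a real tokenizer:

```python
def approx_tokens(text: str) -> int:
    # Rule of thumb: ~4 characters per token for common English text,
    # so ~100 tokens correspond to ~75 words. Use a real tokenizer
    # (e.g. tiktoken) when an exact count matters.
    return max(1, round(len(text) / 4))
```

For example, a 400-character English paragraph comes out to roughly 100 tokens under this estimate.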
A deployment of the GPT-3.5-turbo instance on Azure comes with a rate limit of around 120,000 tokens per minute, equivalent to approximately 2,000 tokens per second (details of the TPM limits for each Azure OpenAI model are given here). It is evident that the quantity of output tokens has a direct impact on the response of Large Language Models (LLMs), consequently influencing the application’s responsiveness. To optimize application response times, it is recommended to minimize the number of output tokens generated. Set an appropriate value for the max_tokens parameter to limit the response length. This can help in controlling the length of the generated output.
The latency of LLMs is influenced not only by the output tokens but also by the input prompts. Input prompts can be categorized into two main types:
Instructions, which serve as guidelines for LLMs to follow, and
Information, providing a summary or context for the grounded data to be processed by LLMs.
While instructions are typically of standard lengths and crucial for prompt construction, the inclusion of multiple tasks may lead to varied instructions, ultimately increasing the overall prompt size. It is advisable to limit prompts to a maximum of one or two tasks to manage prompt size effectively. Additionally, the information or content can be condensed or summarized to optimize the overall prompt length.
Model size
The size of an LLM is typically measured in terms of its parameters. A simple neural network with just one hidden layer has a parameter for each connection between nodes (neurons) across layers and for each node’s bias. The more layers and nodes a model has, the more parameters it will contain. A larger parameter count usually translates into a more complex model that can capture intricate patterns in the data.
Applications frequently utilize Large Language Models (LLMs) for various tasks such as classification, keyword extraction, reasoning, and summarization. It is crucial to choose the appropriate model for the specific task at hand. Smaller models like Davinci are well-suited for tasks like classification or key value extraction, offering enhanced accuracy and speed compared to larger models. On the other hand, large models are more suitable for complex use cases like summarization, reasoning and chat conversations. Selecting the right model tailored to the task optimizes both efficiency and performance.
Leverage Azure-hosted LLM Models
Azure AI Studio provides customers with cutting-edge language models like OpenAI’s GPT-4, GPT-3, Codex, DALL-E, and Whisper models, as well as open-source models, all backed by the security, scalability, and enterprise assurances of Microsoft Azure. The OpenAI models are co-developed by Azure OpenAI and OpenAI, ensuring seamless compatibility and a smooth transition between the different models.
By opting for Azure OpenAI, customers not only benefit from the security features inherent to Microsoft Azure but also run on the same models employed by OpenAI. This service offers additional advantages such as private networking, regional availability, scalability, and responsible AI content filtering, enhancing the overall experience and reliability of language AI applications.
For anyone using GenAI models directly from their creators, transitioning to the Azure-hosted versions of these models has yielded notable enhancements in response time. This shift to Azure infrastructure has led to improved efficiency and performance, resulting in more responsive and timely outputs from the models.
Rate Limiting, Batching, Parallelize API calls
Large language models are subject to rate limits, such as RPM (requests per minute) and TPM (tokens per minute), which depend on the chosen model and platform. It is important to recognize that rate limiting can introduce latency into the application. To accommodate high-traffic requirements, it is recommended to set an appropriate value for the max_tokens parameter to prevent any occurrence of a 429 error, which can lead to subsequent latency issues. Additionally, it is advisable to implement retry logic in your application to further enhance its resilience.
Effectively managing the balance between RPM and TPM allows for enhanced latency through strategies like batching or parallelizing API calls.
When you find yourself reaching the upper limit of RPM but remain comfortably within TPM bounds, consolidating multiple requests into a single batch can optimize your response times. This batching approach enables more efficient utilization of the model’s token capacity without violating rate limits.
Moreover, if your application involves multiple calls to the LLMs API, you can achieve a notable speed boost by adopting an asynchronous programming approach that allows requests to be made in parallel. This concurrent execution minimizes idle time, enhancing overall responsiveness and making the most of available resources.
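The parallelization idea can be sketched with asyncio; the call_llm stub below stands in for a real async client call (the names and canned replies are illustrative):

```python
import asyncio

async def call_llm(prompt: str) -> str:
    # Stand-in for an async LLM API call (a real app would await an async
    # OpenAI client here); sleeps briefly and returns a canned reply.
    await asyncio.sleep(0.01)
    return f"reply:{prompt}"

async def run_parallel(prompts: list[str]) -> list[str]:
    # Fire all requests concurrently; total wall time is roughly the slowest
    # single call rather than the sum of all calls made sequentially.
    return await asyncio.gather(*(call_llm(p) for p in prompts))

replies = asyncio.run(run_parallel(["classify", "summarize", "extract"]))
```

With three sequential calls the total time would be the sum of the three latencies; with asyncio.gather it approaches the latency of the single slowest call.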
If the parameters are already optimized and the application requires additional support for higher traffic and a more scalable approach, consider implementing a load balancing solution through Azure API Management layer.
Stream output and use stop sequence
Every LLM endpoint has a particular throughput capacity. As discussed earlier, a GPT-3.5-turbo instance on Azure comes with a rate limit of 120,000 tokens per minute, equivalent to approximately 2 tokens per millisecond. So an output paragraph of 2,000 tokens takes about 1 second, and the time taken to get the output response increases as the number of tokens increases. The time taken for the output response (latency) can be measured as the sum of the time taken for first-token generation and the time taken per token from the first token onward. That is:
Latency = (Time to first token + (Time taken per token * Total tokens))
So, to improve perceived latency we can stream the output as each token is generated instead of waiting for the entire paragraph to finish. Both the completions and chat Azure OpenAI APIs support a stream parameter which, when set to true, streams the response back from the model via Server-Sent Events (SSE). We can use Azure Functions with FastAPI to stream the output of OpenAI models in Azure, as shown in the blog here.
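Plugging illustrative numbers into the latency formula above makes the case for streaming concrete (the rates here are assumptions, not measured values):

```python
def total_latency_ms(time_to_first_token_ms: float,
                     time_per_token_ms: float,
                     total_tokens: int) -> float:
    # Latency = time to first token + (time per token * total tokens)
    return time_to_first_token_ms + time_per_token_ms * total_tokens

# At ~2 tokens per millisecond (0.5 ms per token) and an assumed 200 ms
# time to first token, a 2,000-token answer takes about 1.2 seconds total.
latency = total_latency_ms(200, 0.5, 2000)
```

With streaming, the user starts reading after the time to first token (~200 ms in this sketch) instead of waiting the full 1.2 seconds.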
Designing an Efficient Workflow orchestration
Incorporating GenAI solutions into applications requires the utilization of specialized frameworks like LangChain or Semantic Kernel. These frameworks play a crucial role in orchestrating multiple Large Language Model (LLM) based tasks and grounding these models on custom datasets. However, it’s essential to address the latency introduced by these frameworks in the overall application response. To minimize the impact on application latency, a strategic approach is imperative. A highly effective strategy involves optimizing LLM usage through workflow consolidation, either by minimizing the frequency of calls to LLM APIs or simplifying the overall workflow steps. By streamlining the process, you not only enhance the overall efficiency but also ensure a smoother user experience.
For example, suppose the requirement is to identify the intent of a user query and, based on its context, get a response grounded on data from multiple sources. Most times such requirements are executed as a 3-step process:
first, identify the intent using an LLM,
next, retrieve the prompt content relevant to that intent from the knowledge base,
and then derive the output from the LLM using that prompt content.
One simple approach could be to leverage data engineering: build a consolidated knowledge base with data from all sources and use the input user text directly as the prompt against the grounded data in the knowledge base, obtaining the final LLM response in almost a single step.
Improving Latency in Ancillary AI Services
The supporting AI services like Vector DB, Azure AI Search, data pipelines, and others that complement a Language Model (LLM)-based application within the overall Retrieval-Augmented Generation (RAG) pattern are often referred to as “ancillary AI services.” These services play a crucial role in enhancing different aspects of the application, such as data ingestion, searching, and processing, to create a comprehensive and efficient AI ecosystem. For instance, in scenarios where data ingestion plays a substantial role, optimizing the ingestion process becomes paramount to minimize latency in the application.
Similarly, let’s look at improving a few other such services:
Azure AI search
Here are some tips for better performance in Azure AI Search:
Index size and schema: Queries run faster on smaller indexes. One best practice is to periodically revisit index composition, both schema and documents, to look for content reduction opportunities. Schema complexity can also adversely affect indexing and query performance. Excessive field attribution builds in limitations and processing requirements.
Query design: Query composition and complexity are one of the most important factors for performance, and query optimization can drastically improve performance.
Service capacity: A service is overburdened when queries take too long or when the service starts dropping requests. To avoid this, you can increase capacity by adding replicas or upgrading the service tier.
For more information on optimizing an Azure AI Search index, please refer here.
For optimizing third-party vector databases, consider exploring techniques such as vector indexing, Approximate Nearest Neighbor (ANN) search (instead of KNN), optimizing data distribution, implementing parallel processing, and incorporating load balancing strategies. These approaches enhance scalability and improve overall performance significantly.
Conclusion
In conclusion, these strategies contribute significantly to mitigating latency and enhancing response in large language models. However, given the inherent complexity of these models, the optimal response time can fluctuate between milliseconds and 3-4 seconds. It is crucial to recognize that comparing the response expectations of large language models to those of traditional applications, which typically operate in milliseconds, may not be entirely equitable.
New Teams for US Government (GCC) Webinars
Discover the latest updates and best practices for a seamless transition to the new Microsoft Teams. Join us for an informative webinar with Teams Engineering, designed to provide you with firsthand knowledge. Ensure a successful migration as we approach the upcoming deadlines:
3.31.2024 (for Desktop, Web)
6.30.2024 (for VDI)
Session Agenda:
New Teams App Considerations for GCC (Mac/Web/Windows/VDI)
Migration Approaches
Known Limitations
Recent Service Updates
Q&A – extended for an additional 30 minutes (attending during this extra time is optional)
Reserve your spot today by registering:
Option #1
When: February 07, 2024 9:00 AM-10:30 AM EST
Duration: 90 minutes
Registration URL: New Teams in GCC Registration Page (eventbuilder.com)
Option #2
When: February 12, 2024 2:30 PM-4:00 PM PST
Duration: 90 minutes
Registration URL: New Teams in GCC (West Coast friendly) Registration Page (eventbuilder.com)
Don’t miss this opportunity to stay informed and make the transition to the new Teams smooth. Reserve your spot now by registering for the webinar. We look forward to your participation and addressing any questions you have, ensuring a successful migration process.
Support tip: Improving the efficiency of dynamic group processing with Microsoft Entra ID and Intune
By: Chris Kunze – Sr. Product Manager | Microsoft Intune
If you’re managing a lot of devices, you know how important it is to keep your Microsoft Entra ID dynamic group processing running smoothly and efficiently. To encourage performant dynamic group rules, the ‘contains’ and ‘not Contains’ operators were recently removed (MC705357) from the rule builder’s list of operators. While it’s still possible to use these operators if you edit the rule syntax manually, there is a reason why these operators were removed. Certain properties and operators, such as ‘contains’ and ‘match’, are significantly less efficient in group processing than others. This inefficiency can lead to significant delays in dynamic group processing. You can optimize these rules by using more performant alternatives such as ‘Equals’, ‘Not Equals’, ‘Starts With’, and ‘Not Starts With’.
In addition, some device properties that are available when creating a dynamic group are not indexed, which also leads to inefficiencies in processing the group membership. It’s best to avoid using these properties until they are indexed, if possible. The deviceOwnership and enrollmentProfileName properties have recently been indexed, and work is ongoing to index the following properties to improve dynamic group processing efficiency:
deviceCategory
deviceManagementAppId
deviceManufacturer
deviceModel
deviceOSType
deviceOSVersion
devicePhysicalIds
deviceTrustType
isRooted
managementType
objectId
profileType
systemLabels
Using this guidance, we saw significant improvement in group membership evaluation times in a large customer’s production environment.
Here’s a quick example. An organization wants to group all devices that were enrolled with any of these 3 enrollment profiles:
iOS devices – Teachers
iOS devices – Students
iOS devices – Admins
While the rule device.enrollmentProfileName -contains "iOS devices" works, the rule device.enrollmentProfileName -startsWith "iOS devices" yields the same results but is a much more efficient query.
Evaluating your dynamic group rules with PowerShell
The following is a sample script that you can use to output the displayName, id, and membershipRule for each of the dynamic groups in your organization to a CSV-based file. Using this output, you can quickly list and evaluate the membership rules for all of your Entra ID dynamic groups for inefficiencies and start improving them.
$csvPath = "C:\temp\"
$csvFile = "dynGroups.csv"
if (!(Get-InstalledModule Microsoft.Graph -ErrorAction SilentlyContinue)) {
    Write-Host "You need to install the Microsoft.Graph module to run this script." -ForegroundColor Red
    Write-Host "Run 'Install-Module Microsoft.Graph -Scope CurrentUser' as an administrator" -ForegroundColor Red
    exit 1
}
if (!(Get-MgContext -ErrorAction SilentlyContinue)) {
    Connect-MgGraph -Scopes "Directory.Read.All,Group.Read.All"
}
$results = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/groups?`$filter=groupTypes/any(c:c+eq+'dynamicMembership')"
$dynamicGroups = $results.value
do {
    if ($results.'@odata.nextLink') {
        $results = Invoke-MgGraphRequest -Method GET -Uri $results.'@odata.nextLink'
        $dynamicGroups += $results.value
    }
} while ($results.'@odata.nextLink')
$dynamicGroups | Select-Object displayName, id, membershipRule | Export-Csv -Path (Join-Path $csvPath $csvFile) -NoTypeInformation
Conclusion
We recommend evaluating your group membership rules to see how you can write them more efficiently. Use ‘Equals’ and ‘Starts With’ wherever possible and avoid using the non-indexed properties listed above if they don’t materially change the membership of the dynamic group. You can learn more about creating efficient rules by reading this documentation: Create simpler, more efficient rules for dynamic groups in Microsoft Entra ID.
We hope this helps to improve the processing of your dynamic group memberships! If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
Microsoft Tech Community – Latest Blogs –Read More
New partner-provided calling solutions are available for Microsoft Teams users in India
Each month, over 17 million users worldwide rely on Teams Phone for smart, seamless, and secure communication and collaboration. As we add to these capabilities, we’re focused on providing flexibility and choice, expanding the availability of Teams Phone to new markets.
Microsoft has partnered with local operators to give Teams customers with operations in India additional choice to enable calling capabilities in Teams. We are excited to share that new Teams Phone-powered solutions from Airtel, Tata Communications Limited, and Tata Tele Business Services are now generally available. Leveraging the Operator Connect platform, these three operators have developed and will sell and support full-featured telephony solutions for Teams users in India, in compliance with local market regulations.
Increased choice and flexibility
These new partner offerings provide Teams customers in India with:
Easy and fast provisioning: Connect to your operator’s PSTN services in minutes and enable cloud-calling capabilities for your teams, without the need for any additional hardware or software.
Solutions that are simple to manage: Experience clear calling and enhanced reliability with proactive monitoring and automated system optimization along with the efficiency of a single admin console.
Local support and billing: Work with your local operator for support and billing, and leverage their expertise and experience in the Indian market.
Full-featured calling: Access all the voice features that Teams Phone offers, such as call queues, auto attendants, voicemail, call park, and call transfer.
Compliance with local regulations: These partner-provided solutions enable you to deploy calling capabilities in Teams in compliance with local regulations in India.
What are the differences between Operator Connect and the current Direct Routing solution in India?
The current Direct Routing solution Microsoft offers in India is a hybrid telephony solution that requires you to deploy and manage Session Border Controllers (SBCs) either on-premises or in the cloud to connect to the public switched telephone network (PSTN). With Direct Routing, you can make and receive calls from Teams when connected to your corporate network.
The new cloud-based partner solutions do not require you to deploy or manage any SBCs, as the operators handle the Direct Routing connection for you. These solutions also give you access to an expanded set of Teams Phone capabilities, while simplifying the process of enabling calling for users in India, and allowing you to benefit from the technology and market experience these partners bring.
Get started with Teams Phone-powered solutions in India
Take these steps to enable your users in India with a Teams Phone-powered calling solution from one of these partners:
1. Verify you have a qualifying Microsoft Teams license
2. Contact your preferred operator for their respective solution to enable Teams Phone capabilities
Airtel
Tata Communications Limited
Tata Tele Business Services
3. Assign licenses and phone numbers to your users and configure your voice policies and call routing settings as needed through the Teams admin center
4. Start making and receiving calls using your chosen operator’s service through Microsoft Teams
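Step 3 can also be scripted. As a hedged sketch (assuming the MicrosoftTeams PowerShell module is installed; the user identity and phone number below are placeholders, not from this post), an admin could assign an Operator Connect number like this:

```powershell
# Hypothetical example: assign an Operator Connect number to a user.
# Identity and phone number are placeholders.
Connect-MicrosoftTeams
Set-CsPhoneNumberAssignment -Identity "user@contoso.com" `
    -PhoneNumber "+911234567890" `
    -PhoneNumberType OperatorConnect
```

The Teams admin center remains the primary experience; scripting like this is useful mainly for bulk assignments.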
Explore the following resources to learn more about Operator Connect and Teams Phone:
Operator Connect directory
Plan Operator Connect for India
Setting up Teams Phone
Import-Module for advanced functions doesn’t work as expected
Using [CmdletBinding(SupportsShouldProcess)] on functions defined inside the main script works as expected: when passing -WhatIf or -Confirm:$true to the script, those parameters are passed through to the advanced function too.
However, when moving those functions into a module (for example, Tools.psm1), the function is imported and can be called, but those parameters are NOT passed to that function.
Any ideas are welcome.
Kind Regards,
Thomas
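One common explanation for this behavior (a hedged sketch, not a confirmed diagnosis of this particular setup): preference variables such as $WhatIfPreference do not flow from a caller's scope into a module's session state, so a module function only honors -WhatIf when it is passed explicitly. The function and caller names below are illustrative:

```powershell
# Tools.psm1 -- hypothetical module function
function Remove-Thing {
    [CmdletBinding(SupportsShouldProcess)]
    param([Parameter(Mandatory)][string]$Path)
    # ShouldProcess honors -WhatIf / -Confirm passed to *this* function
    if ($PSCmdlet.ShouldProcess($Path, 'Remove')) {
        Remove-Item -Path $Path
    }
}

# Caller script: forward the caller's preference explicitly,
# because it does not cross the module boundary on its own
[CmdletBinding(SupportsShouldProcess)]
param()
Import-Module .\Tools.psm1
Remove-Thing -Path 'C:\temp\old.log' -WhatIf:$WhatIfPreference
```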
msixbundle – powershell installation as unprivileged user
Is it correct that an unprivileged user can update an msixbundle like the desktop app installer without the administrators permission or would the new installer then only be applied to their own profile?
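As a hedged sketch of the distinction the question is asking about: MSIX packages install per user by default, so an unprivileged user updating a bundle like the App Installer only affects their own profile, while machine-wide provisioning requires elevation. File names below are placeholders:

```powershell
# Per-user install/update: no admin rights needed, applies only to this profile
Add-AppxPackage -Path .\Microsoft.DesktopAppInstaller.msixbundle

# Machine-wide provisioning (staged for all users): requires an elevated session
# Add-AppxProvisionedPackage -Online -PackagePath .\Microsoft.DesktopAppInstaller.msixbundle -SkipLicense
```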
SDN: Migrating from Rest Name to Static IP | Remove worries of DNS!
[Special Thanks to Adam Rudell, our Sr. Support Escalation Engineer, for putting together an excellent video and tutorial]
Hello SDN Community!
Today, when you deploy Software Defined Networking via SDNExpress or Windows Admin Center (WAC), you must provide a REST DNS name. This is often referred to as a Dynamic DNS deployment. The northbound API endpoint (REST DNS name) is used for all of our clients and management experience. For example, if the replica moves from one Network Controller (NC) VM to another, the API service will perform a DNS registration to update the northbound API record. In some instances, additional security hardening or third-party integration could cause issues with the DNS registration.
Introducing Static IP configuration for Network Controller REST!
With a Static IP configuration, when a replica moves to another NC VM, the API service simply programs a secondary IP with no DNS registration call. This provides fault tolerance and relaxes the DNS registration requirements of SDN.
Static IP can be used with a new SDN deployment OR existing SDN deployments! Check out the video that Adam Rudell, our Sr. Support Escalation Engineer, put together for an in-depth walkthrough of configuring this on a new SDN deployment, an existing SDN deployment, and then even moving back to a Dynamic DNS configuration! Also, ICYMI, we launched a new YouTube channel focused entirely on Microsoft SDN across our Edge portfolio – be sure to Like, Subscribe, and Follow as we continue to post more content!
New Microsoft Teams bulk installer is now available for Windows
We are happy to share that the new Microsoft Teams bulk installer is now available for Windows.
We shared the news of the general availability of new Microsoft Teams in this blog post, and we have also made available tools that help admins to install the new Teams app. More details can be found in Bulk deploy the new Microsoft Teams desktop client.
Online deployment: Download and install the latest new Teams app machine wide:
Command (Run with admin privilege): teamsbootstrapper.exe -p
During online deployment, the bootstrapper app detects the CPU architecture of the system, downloads the corresponding installer of the most recently released new Teams client, and installs the client machine wide.
Offline deployment: Install pre-downloaded new Teams client MSIX package machine wide: Download Microsoft Teams Desktop and Mobile Apps
For admins concerned with network bandwidth usage of online deployment, offline deployment mode is a great alternative. Admins can download the client only once and use the bootstrapper to bulk deploy machines in their tenant.
Command for local path (Run with admin privilege): teamsbootstrapper.exe -p -o "C:\path\to\teams.msix"
Command for UNC path (Run with admin privilege): teamsbootstrapper.exe -p -o "\\unc\path\to\teams.msix"
During offline deployment, the bootstrapper app installs the admin specified package from either local system or UNC path. Please make sure the correct version of new Teams client is downloaded.
Bulk remove new Teams:
Command for deleting every occurrence of new Teams installation: teamsbootstrapper.exe -x
If you choose the bulk removal option, it will uninstall both the machine level and the user level installations. New Teams app instances that are running will be stopped.
We advise admins to use the bulk installer tool to install new Teams client for their tenants.
There are separate new Teams installer files depending on the target system’s CPU architecture: x64, x86, or ARM64. The bootstrapper automatically detects the system architecture and downloads the appropriate installer file to avoid performance problems from an architecture mismatch.
Online mode automatically downloads the most recent released version of the new Teams app. This prevents the problem of outdated versions of the app being installed over and over, which can increase network usage (outdated app versions will update to the newest release right after installation), and slow down essential feature or security updates.
The bootstrapper can be deployed by admins using the deployment tools they already have, for example Intune or Configuration Manager (SCCM).
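As a minimal sketch of an offline bulk deployment pushed through such a tool (the share paths and file names below are placeholders, not from this post), the bootstrapper might be wrapped like this:

```powershell
# Hypothetical offline-deployment wrapper; run with admin privilege.
# Paths are placeholders -- point them at your own share.
$bootstrapper = "\\fileserver\deploy\teamsbootstrapper.exe"
$package      = "\\fileserver\deploy\MSTeams-x64.msix"

& $bootstrapper -p -o $package
if ($LASTEXITCODE -ne 0) {
    Write-Error "New Teams provisioning failed with exit code $LASTEXITCODE"
}
```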
Upcoming features and bug fixes:
Auto start support – Create a new command line option that launches the new Teams app for all users on the machine after provisioning.
Save money with Arc SQL Server licensing – what you need to know | Data Exposed
If you are used to traditional licensing options and pay for software assurance, you may wonder why you would ever use pay-as-you-go billing. In this episode of Data Exposed with Anna Hoffman and Sasha Nosov, we’ll cover how to use our new PayG model and understand how you can use Extended Security Updates.
Resources:
SQL Server enabled by Azure Arc – SQL Server | Microsoft Learn
View/share our latest episodes on Microsoft Learn and YouTube!
Partner Blog | Reskill: Bridging the talent gap in ERP
Our guest contributor for today’s blog is Kurt Juvyns, Principal Product Manager, Dynamics 365 Business Central.
In the rapidly evolving enterprise resource planning landscape, partners across the globe face a common challenge: talent scarcity.
According to a recent Microsoft survey, a staggering 77% of Business Central partners currently find themselves hindered in achieving business objectives due to a lack of skilled professionals. To address this shortage of talent, we collaborated with leading workforce development partners to develop Reskill — an initiative aimed at bridging this talent gap.
This comprehensive engagement, part of the Microsoft Learn Career Connected program, is designed to attract a diverse range of individuals with specialized skills from outside the “traditional” Business Central ecosystem. Through an in-depth training and coaching regimen, Reskill transforms new hires into adept Business Central consultants and developers, equipping them to navigate the complexities of today’s digital landscape.
Continue reading here
Azure Virtual Desktop for Azure Stack HCI now available!
Azure Virtual Desktop for Azure Stack HCI—now in general availability—extends the capabilities of the Microsoft Cloud to your datacenters.
IT pros face a complex and challenging environment as they help their organizations move to the cloud, especially when the cloud isn’t the best option for every workload. Managing hybrid cloud migrations while meeting the needs of today’s distributed workforce takes a comprehensive approach that balances performance and accessibility with security and control. For organizations that need desktop virtualization for applications that must remain on-premises for performance, data locality, or regulatory reasons, Azure Virtual Desktop for Azure Stack HCI may be the right solution.
Organizations like Commvault, a global leader in data management, also echo these benefits:
At Commvault, we have a unique environment where thousands of employees need hyper-local access to data with the lowest latency possible. Azure Virtual Desktop for Azure Stack HCI checked all the boxes and met our needs with its ease of deployment and management, network and storage performance, and its security integrations allowing for governed access policies.
– Ernie Costa, Business Cloud Operations Manager, Commvault
Azure Virtual Desktop and Azure Stack HCI each deliver their own value, and with Azure Virtual Desktop for Azure Stack HCI, organizations can experience both. Read on to learn more about each solution and how Azure Virtual Desktop for Azure Stack HCI brings both together.
What is Azure Virtual Desktop?
Azure Virtual Desktop is a cloud VDI solution designed to meet the challenges of remote and hybrid work. It enables a secure, remote desktop experience from anywhere, providing employees the best virtualized experience with the only solution fully optimized for Windows 11 and Windows 10 multi-session capabilities. It has built-in security to help keep your organization’s applications and data secure and compliant. Azure Virtual Desktop can simplify deployment and management of your infrastructure, gives you full control over configuration and management, and reduces costs by optimizing through existing virtualization investments and skills, as well as consumption-based pricing where you only pay for what you use.
What is Azure Stack HCI?
Azure Stack HCI is a Microsoft infrastructure solution powered by hyperconverged infrastructure (HCI) that hosts Windows and Linux virtual machines (VMs) or containerized workloads and their storage. It’s a hybrid product by design that connects on-premises systems to Azure for cloud-based services, monitoring, and management. Azure Stack HCI gives organizations the agility and cost-effectiveness of public cloud infrastructure while meeting the use case and regulatory requirements for specialized workloads that can’t live in the public cloud.
The new feature release of Azure Stack HCI is now in general availability. It brings cloud-based cluster deployment and management together with Azure Arc infrastructure, providing centralized management of workloads. Learn more about Azure Stack HCI.
What is Azure Virtual Desktop for Azure Stack HCI?
Bringing the benefits of Azure Virtual Desktop and Azure Stack HCI together, Azure Virtual Desktop for Azure Stack HCI lets organizations securely run virtualized desktops and apps on-premises at the edge or in their datacenter. For organizations with data residency requirements, latency-sensitive workloads, or data proximity requirements, Azure Virtual Desktop for Azure Stack HCI extends the capabilities of the Microsoft Cloud to your datacenters.
With Azure Virtual Desktop for Azure Stack HCI, you can:
Improve performance for Azure Virtual Desktop users in areas with poor connectivity to the Azure public cloud by giving them session hosts closer to their location.
Meet data locality requirements by keeping app and user data on-premises. For more information, see Data locations for Azure Virtual Desktop.
Improve access to legacy on-premises apps and data sources by keeping desktops and apps in the same location.
Deliver the full Windows experience while retaining cost efficiency with Windows 11 and Windows 10 Enterprise multi-session.
Unify your VDI deployment and management compared to traditional on-premises VDI solutions by using the Azure portal.
Achieve the best performance by using RDP Shortpath for low-latency user access.
Deploy the latest fully patched images quickly and easily using Azure Marketplace images.
How to get started
Azure Virtual Desktop for Azure Stack HCI is generally available with the new feature release of Azure Stack HCI. After you deploy an Azure Stack HCI cluster, it will be available as a resource location for Azure Virtual Desktop host pools.
To run Azure Virtual Desktop for Azure Stack HCI, you first need to make sure you’re licensed correctly and understand the pricing model. There are three components that affect how much it costs to run Azure Virtual Desktop for Azure Stack HCI:
User access rights. The same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop for Azure Stack HCI. Learn more at Azure Virtual Desktop pricing. Note, per-user access pricing for external users is not supported on Azure Virtual Desktop for Azure Stack HCI.
Infrastructure costs. Learn more at Azure Stack HCI pricing.
Hybrid service fee. This fee requires you to pay for each active virtual CPU (vCPU) for your Azure Virtual Desktop session hosts running on Azure Stack HCI. This fee is active once the preview period ends.
Want to learn more?
Watch the Microsoft Mechanics episode to see a demo of Azure Virtual Desktop for Azure Stack HCI.
Read the Azure Stack HCI blog to learn more about its general availability details.
Your feedback makes our features better
For customers who have already experienced Azure Virtual Desktop for Azure Stack HCI in our preview phase, your feedback has been invaluable, and we thank you. We continue to integrate your requests and ideas into future releases, and your feedback helps our development teams prioritize new features, address missed opportunities, and ensure the product better meets your needs.
How Microsoft 365 Delivers Trustworthy AI Blog Post
How Microsoft 365 Delivers Trustworthy AI Whitepaper
In the rapidly evolving business landscape, corporations are perpetually in search of innovative strategies that can amplify productivity and bolster security. Microsoft President Brad Smith wrote in his blog: AI advancements are revolutionizing knowledge work, enhancing our cognitive abilities, and are fundamental to many aspects of life. These developments present immense opportunities to improve the world by boosting productivity, fostering economic growth, and reducing monotony in jobs. They also enable creativity, impactful living, and discovery of insights in large data sets, driving progress in various fields like medicine, science, business, and security. However, the integration of AI into business operations is not without its hurdles. Companies are tasked with ensuring that their AI solutions are not only robust but also ethical, dependable, and trustworthy.
How Microsoft 365 Delivers Trustworthy AI is a comprehensive document providing regulators, IT pros, risk officers, compliance professionals, security architects, and other interested parties with an overview of the many ways in which Microsoft mitigates risk within the artificial intelligence product lifecycle. The document outlines the Microsoft promise of responsible AI, the responsible AI standard, industry leading frameworks, laws and regulations, methods of mitigating risk, and other assurance-providing resources. It is intended for a wide range of audiences external to Microsoft, who are interested in or involved in the development, deployment, or use of Microsoft AI. As Charlie Bell, EVP of Security at Microsoft describes in his blog, “As we watch the progress enabled by AI accelerate quickly, Microsoft is committed to investing in tools, research, and industry cooperation as we work to build safe, sustainable, responsible AI for all.”
The commitments and standards conveyed in this paper operate at the Microsoft cloud level – these promises and processes apply to AI activity across Microsoft. Where the paper becomes product specific, its sole focus is Microsoft Copilot for Microsoft 365. This does not include Microsoft Copilot for Sales, Microsoft Copilot for Service, Microsoft Copilot for Finance, Microsoft Copilot for Azure, Microsoft Copilot for Microsoft Security, Microsoft Copilot for Dynamics 365, or other Copilots outside of Microsoft 365.
At Microsoft, we comprehend the significance of trustworthy AI. We have formulated a comprehensive strategy for responsible and secure AI that zeroes in on addressing specific business challenges such as safeguarding data privacy, mitigating algorithmic bias, and maintaining transparency. This whitepaper addresses our strategy for mitigating AI risk as part of the Microsoft component of the AI Shared Responsibility Model.
The document is divided into macro sections with relevant articles within each:
Responsible and Secure AI at Microsoft – this section focuses on Microsoft’s commitment to responsible AI and what this looks like in practice. The articles within address key topics including:
The Office of Responsible AI – read this to gain a deeper understanding of what comprises this division within Microsoft.
The Responsible AI Standard and Impact Assessment – every Microsoft AI project must adhere to the Responsible AI Standard and have a valid impact assessment completed.
Microsoft’s voluntary White House commitments – learn more about the voluntary commitments Microsoft made to the White House and how these principles are reflected in our development and deployment practices.
Artificial Generative Intelligence Security team – learn about Microsoft’s center of excellence for Microsoft’s generative AI security and the initiatives being driven by this team.
Addressing New Risk – this section centers on the ways in which Microsoft is continuously improving its security practices and service design to mitigate new risk brought forth by the era of AI. As Brad Smith states in his blog, “Even as recent years have brought enormous improvements, we will need new and different steps to close the remaining cybersecurity gap.” This section addresses many actions Microsoft takes to address novel and preexisting risks in the era of AI. The articles within address salient topics including:
The Copilot Copyright Commitment – how Microsoft addresses the risk of customers inadvertently using copyrighted material via Microsoft AI services.
Updating the Security Development Lifecycle (SDL) to address AI risk – the ways Microsoft has adapted our SDL to identify and prioritize AI specific risks.
Copilot tenant boundaries and data protection with shared binary LLMs – this article describes how your data remains protected and secured throughout the data flow process to the copilot LLMs and back to your end user in this multi-tenant environment.
Copilot data storage and processing – this section answers the question, “what are the data storage and processing commitments applicable to Microsoft 365 copilot today?”
AI specific regulations and frameworks for assurance – this section describes upcoming regulations relevant to artificial intelligence and how Microsoft plans to address each. Regulations and frameworks addressed include:
European Union AI Act
ISO 42001 AI Management System
Cyber Executive Order (EO 14028)
NIST AI Risk Management Framework
Assurance-Providing Resources – this comprises miscellaneous resources providing customers assurance that Microsoft is mitigating risk as part of the shared responsibility model.
Defense-in-depth: controls preventing model compromise in the production environment – this article outlines an entire Microsoft control set designed to mitigate model compromise through defense-in-depth.
As with everything Microsoft does, this whitepaper is subject to continuous update and improvement. Please reach out to your Microsoft contacts if you have questions regarding this content; thank you for your continued support and utilization of Microsoft AI.
Download the Whitepaper
We hope this whitepaper has provided you with valuable insights into how Microsoft delivers trustworthy AI across its products and services. If you want to learn more about our responsible and secure AI strategy, you can download the full whitepaper here: https://aka.ms/TrustworthyAI. This document will give you a comprehensive overview of the Microsoft promise of responsible AI, the responsible AI standard, industry leading frameworks, laws and regulations, methods of mitigating risk, and other assurance-providing resources. You will also find detailed information on how Microsoft Copilot for Microsoft 365 adheres to these principles and practices. Download the whitepaper today and discover how Microsoft can help you achieve your AI goals with confidence and trust.
Master Generative AI with Azure OpenAI Service: A Comprehensive Guide for Students