March 2024 Viva Glint newsletter
Welcome to the March edition of our Viva Glint newsletter. Our recurring communications will help you get the most out of the Viva Glint product. You can always access the current edition and past editions of the newsletter on our Viva Glint blog.
Our next features release date
Viva Glint’s next feature release is scheduled for March 9, 2024*. Your dashboard will provide date and timing details two or three days before the release.
In your Viva Glint programs
The Microsoft Copilot Impact Survey template has premiered in the Viva Glint platform. AI tools are increasingly integrated into the workplace to enhance workforce productivity and the employee experience. This transformational shift in work means leaders need to understand their early investments in Microsoft Copilot and how it is being adopted. By deploying the Copilot Impact Survey template in Viva Glint, organizations can measure the impact of Microsoft Copilot, enabling leaders to plan AI readiness, drive adoption, and measure their ROI. Learn about the Copilot Impact survey here.
Changing item IDs for expired cycles will be self-serve. Comparing survey trends is essential to tracking focus area progress over time. When a survey is retired, you can still use an item's data from that survey as a comparison in a new survey that uses the identical item. And you can do it quickly and independently! Learn how to change survey item IDs here.
We’ve updated our Action Plan templates! Action Plan templates provide resources to help organizations act on feedback. Content comes from our new learning modules, WorkLab articles, and LinkedIn Learning. Now we’re exploring opportunities across all Viva and Copilot products to harness sentiment and data to enhance the employee experience and surface relevant, contextualized action recommendations. Check out Action Plan guidance here.
Support survey takers with new help content
Simplify your support process during live Viva Glint surveys to help users easily submit their valuable feedback. As an admin, use support guidance to communicate proactively and create resources that address questions commonly asked by survey takers. Share help content directly with your organization so that survey takers have answers to all their questions.
Announcing our new Viva Glint product council
Viva Glint is launching a product council! We are keen to listen to you, our customers, to help inform the future of our product. By enrolling, you will hear directly from our product and design teams, have an impact in shaping our product, and connect with like-minded customers to discuss your Viva Glint journey. To learn more and express an interest in signing up, visit this blog post.
Connect and learn with Viva Glint
We are officially launching our badging program! We are excited to announce that Viva Glint users can now earn badges upon completion of recommended training modules and then publish them to their social media networks. We're kicking off this program by offering both a Foundations Admin badge course and a Manager badge course. Learn more about badging here.
Get ready for our next Viva Glint: Ask the Experts session on March 12. Geared towards new Viva Glint customers who are in the process of deploying their first programs, this session focuses on User Roles and Permissions. You must be registered to attend the session. Bring your questions! Register here for Ask the Experts.
Join us at our upcoming Microsoft and Viva hosted events
Attend our Think like a People Scientist webinar series. Premiering in February (if you missed it, you can catch the recording here!), this series, created based on customer feedback, deep dives into important topics that you may encounter on your Viva Glint journey. Register for our upcoming sessions below:
March 20: Telling a compelling story with your data
April 23: Influencing action without authority
May 28: Designing a survey that meets your organization’s needs
We are also kicking off our People Science x AI Empowerment series. Check out and register for our upcoming events that will help empower HR leaders with the knowledge and resources to feel confident, excited, and ready to bring AI to their organizations:
March 14: AI overview and developments for Viva Glint featuring Viva Glint People Science and Product leaders
April 18: AI: the game-changer for the employee experience featuring Microsoft research and applied science leaders
For those in the Vancouver area, join us for Microsoft Discovery Day on March 6. During this in-person event at Microsoft Vancouver you will learn from Microsoft leaders and industry experts about fundamental shifts in the workplace and the implications for your business. Gain an understanding of the value of AI-powered insights and experiences to build engagement and inspire creativity. Register.
Join the Viva People Science team at upcoming industry events
Are you attending the Wharton People Analytics Conference on March 14-15? As sponsors of the event, we will be there, and we would love to see you at our booth! This conference explores the latest advances and urgent questions in people analytics, including AI and human teaming, neurodiversity, new research on hybrid and remote work, and the advancement of frontline workers. Learn more about the conference here.
Our Microsoft Viva People Scientists are among the featured speakers at the Society for Industrial and Organizational Psychology (SIOP) annual conference in April. Live in Chicago, and also available virtually, the SIOP conference inspires and galvanizes our community through sharing knowledge, building connections, fostering inclusion, and stimulating new ideas. Learn more here.
Join Rick Pollak on April 18 for a panel discussion, Your Employee Survey is Done. Now What? Rick and leading experts will address best practices and advice about survey reporting, action taking, and more.
Join Caribay Garcia and other industrial organizational psychology innovators on April 19 for IGNITE-ing Innovation: Uses of Generative AI in Industrial Organization Psychology. This session will help psychologists conduct timelier research by fostering cross-collaborative communication between academics and practitioners.
Join Stephanie Downey and other industry experts on April 19 for Ask the Experts: Crowdsource Solutions to Your Top Talent Challenges. This session brings together industry experts to facilitate roundtable discussions focused on key talent and HR challenges.
Join Stephanie Downey again on April 19 for Alliance: Unlocking Whole Person Management: Benefits, Hidden Costs, and Solutions. Explore the multifaceted dimensions of whole person management (WPM) by delving into the benefits and challenges this approach creates.
Join Carolyn Kalafut on April 19 for Path to Product. This seminar provides an intro to understanding product and the ability to influence the software development lifecycle and to embed responsible and robust I-O principles in it.
Join Caribay Garcia on April 20 for Harnessing Large Language Models in I-O Psychology: A Revolution in HR Offerings. Delve into the practical implications, ethical concerns, and the future of large language models (LLMs) in HR.
Check out our most recent blog content on the Microsoft Viva Community
Assess how your organization feels about Microsoft Copilot
Viva People Science Industry Trends: Retail
How are we doing?
If you have any feedback on this newsletter, please reply to this email. Also, if there are people on your teams that should be receiving this update, please have them sign up using this link.
*Viva Glint is committed to consistently improving the customer experience. The cloud-based platform maintains an agile production cycle with fixes, enhancements, and new features. Planned program release dates are provided with the best intentions of releasing on these dates, but dates may change due to unforeseen circumstances. Schedule updates will be provided as appropriate.
Microsoft Tech Community – Latest Blogs – Read More
Microsoft and open-source software
Microsoft has embraced open-source software—from offering tools for coding and managing open-source projects to making some of its own technologies open source, such as .NET and TypeScript. Even Visual Studio Code is built on open source. For March, we’re celebrating this culture of open-source software at Microsoft.
Explore some of the open-source projects at Microsoft, such as .NET on GitHub. Learn about tools and best practices to help you start contributing to open-source projects. And check out resources to help you work more productively with open-source tools, like Python in Visual Studio Code.
.NET is open source
Did you know .NET is open source? .NET is cross-platform, and it's maintained by Microsoft and the .NET community. Check it out on GitHub.
Python Data Science Day 2024: Unleashing the Power of Python in Data Analysis
Celebrate Pi Day (3.14) with a journey into data science with Python. Set for March 14, Python Data Science Day is an online event for developers, data scientists, students, and researchers who want to explore modern solutions for data pipelines and complex queries.
C# Dev Kit for Visual Studio Code
Learn how to use the C# Dev Kit for Visual Studio Code. Get details and download the C# Dev Kit from the Visual Studio Marketplace.
Visual Studio Code: C# and .NET development for beginners
Have questions about Visual Studio Code and C# Dev Kit? Watch the C# and .NET Development in VS Code for Beginners series and start writing C# applications in VS Code.
Reactor series: GenAI for software developers
Step into the future of software development with the Reactor series. GenAI for Software Developers explores cutting-edge AI tools and techniques for developers, revolutionizing the way you build and deploy applications. Register today and elevate your coding skills.
Use GitHub Copilot for your Python coding
Discover a better way to code in Python. Check out this free Microsoft Learn module on how GitHub Copilot provides suggestions while you code in Python.
Getting started with the Fluent UI Blazor library
The Fluent UI Blazor library is an open-source set of Blazor components used for building applications that have a Fluent design. Watch this Open at Microsoft episode for an overview and find out how to get started with the Fluent UI Blazor library.
Remote development with Visual Studio Code
Find out how to tap into more powerful hardware and develop on different platforms from your local machine. Check out this Microsoft Learn path to explore tools in VS Code for remote development setups and discover tips for personalizing your own remote dev workflow.
Using GitHub Copilot with JavaScript
Use GitHub Copilot while you work with JavaScript. This Microsoft Learn module will tell you everything you need to know to get started with this AI pair programmer.
Generative AI for Beginners
Want to build your own GenAI application? The free Generative AI for Beginners course on GitHub is the perfect place to start. Work through 18 in-depth lessons and learn everything from setting up your environment to using open-source models available on Hugging Face.
Use OpenAI Assistants API to build your own cooking advisor bot on Teams
Find out how to build an AI assistant right into your app using the new OpenAI Assistants API. Learn about the open playground for experimenting and watch a step-by-step demo for creating a cooking assistant that will suggest recipes based on what’s in your fridge.
What’s new in Teams Toolkit for Visual Studio 17.9
What’s new in Teams Toolkit for Visual Studio? Get an overview of new tools and capabilities for .NET developers building apps for Microsoft Teams.
Embed a custom webpage in Teams
Find out how to share a custom web page, such as a dashboard or portal, inside a Teams app. It’s easier than you might think. This short video shows how to do this using Teams Toolkit for Visual Studio and Blazor.
Get to know GitHub Copilot in VS Code and be more productive
Get to know GitHub Copilot in VS Code and find out how to use it. Watch this video to see how incredibly easy it is to start working with GitHub Copilot. Just start coding and watch the AI go to work.
Customize Dev Containers in VS Code with Dockerfiles and Docker Compose
Dev containers offer a convenient way to deliver consistent and reproducible environments. Follow along with this video demo to customize your dev containers using Dockerfiles and Docker Compose.
Designing for Trust
Learn how to design trustworthy experiences in the world of AI. Watch a demo of an AI prompt injection attack and learn about setting up guardrails to protect the system.
AI Show: LLM Evaluations in Azure AI Studio
Don’t deploy your LLM application without testing it first! Watch the AI Show to see how to use Azure AI Studio to evaluate your app’s performance and ensure it’s ready to go live. Watch now.
What’s winget.pro?
The Windows Package Manager (winget) is a free, open-source package manager. So what is winget.pro? Watch this special edition of the Open at Microsoft show for an overview of winget.pro and to find out how it differs from the well-known winget.
Use Visual Studio for modern development
Want to learn more about using Visual Studio to develop and test apps? Start here. In this free learning path, you'll dig into key features for debugging, editing, and publishing your apps.
Build your own assistant for Microsoft Teams
Creating your own assistant app is super easy. Learn how in under 3 minutes! Watch a demo using the OpenAI Assistants, Teams AI Library, and the new AI Assistant Bot template in VS Code.
GitHub Copilot fundamentals – Understand the AI pair programmer
Improve developer productivity and foster innovation with GitHub Copilot. Explore the fundamentals of GitHub Copilot in this free training path from Microsoft Learn.
How to get GraphQL endpoints with Data API Builder
The Open at Microsoft show takes a look at using Data API Builder to easily create GraphQL endpoints. See how you can use this no-code solution to quickly enable advanced—and efficient—data interactions.
Microsoft, GitHub, and DX release new research into the business ROI of investing in Developer Experience
Investing in the developer experience has many benefits and improves business outcomes. Dive into our groundbreaking research (with data from more than 2000 developers at companies around the world) to discover what your business can gain with better DevEx.
Build your custom copilot with your data on Teams featuring Azure the AI Dragon
Build your own copilot for Microsoft Teams in minutes. Watch this demo that builds an AI Dragon to take your team on a cyber role-playing adventure.
Microsoft Graph Toolkit v4.0 is now generally available
Microsoft Graph Toolkit v4.0 is now available. Learn about its new features, bug fixes, and improvements to the developer experience.
Microsoft Mesh: Now available for creating innovative multi-user 3D experiences
Microsoft Mesh is now generally available, providing an immersive 3D experience for the virtual workplace. Get an overview of Microsoft Mesh and find out how to start building your own custom experiences.
Global AI Bootcamp 2024
Global AI Bootcamp is a worldwide annual event that runs throughout the month of March for developers and AI enthusiasts. Learn about AI through workshops, sessions, and discussions. Find an in-person bootcamp event near you.
Microsoft JDConf 2024
Get ready for JDConf 2024—a free virtual event for Java developers. Explore the latest in tooling, architecture, cloud integration, frameworks, and AI. It all happens online March 27-28. Learn more and register now.
Leverage anomaly management processes with Microsoft Cost Management
The cloud comes with the promise of significant cost savings compared to on-premises costs. However, realizing those savings requires diligence to proactively plan, govern, and monitor your cloud solutions. Your ability to detect, analyze, and quickly resolve unexpected costs can help minimize the impact on your budget and operations. When you understand your cloud costs you can make more informed decisions on how to allocate and manage those costs. But even with proactive cost management, surprises can still happen. That’s why we developed several tools in Microsoft Cost Management to help you set up thresholds and rules so you can detect problems early and ensure the timely detection of out-of-scope changes in your cloud costs. Let’s take a closer look at some of these tools and how you can use them to discover anomalous costs and usage patterns.
Identify atypical usage patterns with anomaly detection
Anomaly detection is a powerful tool that can help you minimize unexpected charges by identifying atypical usage patterns, like cost spikes or dips, based on your cost and usage trends so you can take corrective action. For example, you might notice that something has changed, but you're not sure what. Suppose you have a subscription that consumes around $100 every day. A new service was added to the subscription by mistake, resulting in the daily cost doubling to $200. With anomaly detection, you will be notified about the steep spike in daily cost, which you can then investigate to see if it's an expected increase or a mistake, leading to early corrective measures.
You can also embed time-series anomaly detection capabilities into your apps to identify problems quickly. AI Anomaly Detector ingests time-series data of all types and selects the best anomaly detection algorithm for your data to ensure high accuracy. Detect spikes, dips, deviations from cyclic patterns, and trend changes through both univariate and multivariate APIs. Customize the service to detect any level of anomaly. Deploy the anomaly detection service where you need it—in the cloud or at the intelligent edge.
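As a rough illustration of the underlying idea (not Cost Management's or AI Anomaly Detector's actual model), a simple trailing-window z-score check can flag the kind of daily-cost spike described above:

```python
import statistics

def detect_anomalies(daily_costs, window=7, threshold=3.0):
    """Flag days whose cost deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        history = daily_costs[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on flat history
        if abs(daily_costs[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A subscription consuming ~$100/day, then a mistaken service doubles the cost.
costs = [100, 101, 99, 100, 102, 98, 100, 200, 201]
print(detect_anomalies(costs))  # → [7]: the first $200 day is flagged as a spike
```

Note that day 8 is not flagged: once the spike enters the trailing window, it inflates the window's variance. Production-grade detectors handle this with trend models and seasonality, which is why a managed service is preferable to a hand-rolled threshold.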
Use Alerts to get notified when an anomalous usage change is detected
You can subscribe to anomaly alerts to be automatically notified when an anomalous usage change is detected, with a subscription-scope email displaying the underlying resource groups that contributed to the anomalous behavior. Alerts can also be set up for your Azure reserved instances usage to receive email notifications, so you can take remedial action when your reservations have low utilization.
Here’s an example of how to create an anomaly alert rule:
Select the scope as the subscription which needs monitoring.
Navigate to the ‘Cost alerts’ page in Cost Management. Select ‘Anomaly’ as the Alert type.
Specify the recipient email IDs.
Click on ‘Create alert rule.’
If an anomaly is detected, you will receive alert emails that give you basic information to help you start your investigation.
Get deeper insights with smart views
Use smart views in Cost Analysis to view anomaly insights that were automatically detected for each subscription. To drill into the underlying data for something that has changed, select the Insight link. You can also create custom views for anomalous usage detection such as unused costs from Azure reserved instances and savings plans that could point to further optimization for specific workloads.
You can also group related resources in Cost Analysis and smart views. For example, group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID. Or use Charts in Cost Analysis smart views to view your daily or monthly cost over time.
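As a sketch, the parent tag can be applied with the Azure CLI; the subscription, resource group, and resource names below are placeholders for illustration:

```shell
# Tag a disk as a child of its virtual machine so Cost Analysis groups them together.
az tag update --operation Merge \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/rg1/providers/Microsoft.Compute/disks/vm1-osdisk" \
  --tags cm-resource-parent="/subscriptions/<sub-id>/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1"
```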
Use Copilot for AI-based assistance
For quick identification and analysis of anomalies in your cloud spend, try the AI-powered Copilot in Cost Management, available in preview in the Azure portal. If a cost doubles, you can ask Copilot natural-language questions to understand what happened and get the insights you need faster. You don't need to be an expert in navigating the Cost Management UI or analyzing the data; simply let the AI do it for you. For example, you can ask, "why did my cost increase this month?" or "which service led to the increase in cost this month?" Copilot will then provide a breakdown by categories of spend and their percentage impact on your total invoice. From there, you can leverage the generated suggestions to investigate your bill further.
Learn more about streamlining anomaly management
Optimizing your cloud spend with Azure becomes much easier when you streamline your anomaly management processes with tools like anomaly detection, alerts, and smart views in Microsoft Cost Management. You can learn even more about using FinOps best practices to manage anomalies in your resource usage at aka.ms/finops/solutions.
Inclusive and productive Windows 11 experiences for everyone
Today we begin to release new features and enhancements to Windows 11 Enterprise—features that offer a more intuitive and user-friendly experience for both workers and IT admins. Most of these new features will be enabled by default in the March 2024 optional non-security preview release for all editions of Windows 11, versions 23H2 and 22H2. IT admins who want to get the new Windows 11 features can enable optional updates for their managed devices via policy.
New in accessibility
One of the most exciting areas of enhancement involves voice access, a feature in Windows 11 that enables everyone, including people with mobility disabilities, to control their PC and author text using only their voice and without an internet connection. Voice access now supports multiple languages, including French, German, and Spanish. People can create custom voice shortcuts to quickly access frequently used commands. And voice access now works across multiple displays, with number and grid overlays that help people easily switch between screens using only voice commands.
Enhancements to Narrator, the built-in screen reader, are also coming. You’ll be able to preview natural voices before downloading them and utilize a new keyboard command that allows you to more easily move between images on a screen. Narrator’s detection of text in images, including handwriting, has been improved, and it now announces the presence of bookmarks and comments in Microsoft Word.
If you’re interested in learning about Windows 11 accessibility features, please check out the following resources:
Inside Windows 11 accessibility settings and tools
Skilling snack: Accessibility in Windows 11
Skilling snack: Voice access in Windows
Enhanced sharing
Sharing content is now easier with updates to Windows share and Nearby Share. The Windows share window now displays different apps for “Share using” based on the account you use to sign in. Nearby Share has also been improved, with faster transfer speeds for people on the same network and the ability to give your device a friendly name for easier identification when sharing.
Casting
Casting, the feature that allows you to wirelessly send content from your device to a nearby display, has been enhanced. You will receive notifications suggesting the use of Cast when multitasking, and the Cast menu in quick settings now provides more help in finding nearby displays and fixing connections.
Snap layouts
Snap layouts, the feature that helps you organize the apps on your screen, now allows you to hover over the minimize or maximize button of an app to open the layout box, and to view various layout options. This makes it easier for you to choose the best layout for the task at hand.
New Windows 365 features now available
Windows 365 now offers new features, including a dedicated mode for Windows 365 Boot that allows you to sign in to your Cloud PC using passwordless authentication, plus a fast account switching experience. For Windows 365 Switch, which lets you sign in and connect to your Cloud PC using Windows 11 Task view, it's now easier to disconnect from your Cloud PC, and desktop indicators help you see at a glance whether you are on your Cloud PC or local PC.
For more information, see today’s post, New Windows 365 Boot and Switch features now available.
Unified enterprise update management
We are also releasing enhancements to Windows Autopatch in direct response to your feedback. Several new and upcoming enhancements give you more control, extend the value of your investments, and help you streamline update management, including:
The ability to import Update rings for Windows 10 and later (preview)
Customer defined service outcomes (preview)
Improved data refresh speed and reporting accuracy
Looking ahead, one of the most noticeable changes in Windows Autopatch will be a simplified update management interface that will make the update ecosystem easier to understand. We are unifying our update management offering for enterprise organizations—bringing together Windows Autopatch and the Windows Update for Business deployment service into a single service that enterprise organizations can use to update and upgrade Windows devices as well as update Microsoft 365 Apps, Microsoft Teams, and Microsoft Edge.
We invite you to read our ongoing Windows Autopatch updates in the Windows IT Pro Blog to find out more about richer functionality planned for Windows Autopatch. For the latest, see What’s new in Windows Autopatch: February 2024.
Get familiar with the latest innovations, including Copilot, creator apps, and more
Today’s announcement from Yusuf Mehdi offers more details about new innovations coming to Windows 11 including availability and rollout plans. You can find a summary of all the new enhancements and features in the Windows Update configuration documentation and, as always, stay up to date on rollout plans and known issues (identified and resolved) via the Windows release health dashboard.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
What is AI? Jared Spataro at the Global Nonprofit Leaders Summit
Jared Spataro, Microsoft Corporate Vice President, AI at Work, presented an engaging keynote at the Global Nonprofit Leaders Summit that left the audience amazed and optimistic about the capability and simplicity of AI for everyone.
Watch Jared's session for a walkthrough that shows how Microsoft Copilot can be a powerful tool for productivity and creativity. From the fun and fantastic to the practical and powerful, Jared queries Copilot in a real-time demo using his own workstreams in Outlook, Teams, and more:
Can elephants tow a car?
What will the workplace of the future look like?
Can you write a Python script to extract insights from this data?
Can you summarize and prioritize the latest emails from my boss?
Jared shares important tips for prompt engineering, previews the new “Sounds like me” feature to co-create responses in your own voice, and talks about the value of AI being “usefully wrong.”
And he reminds us to say please and thank you.
What did you learn from Jared’s session? How are you using Copilot to enhance creativity and productivity?
Updates from 162.1 and 162.2 releases of SqlPackage and the DacFx ecosystem
Within the past four months, we've had two minor releases and a patch release for SqlPackage. In this article, we'll recap the features and notable changes from SqlPackage 162.1 (October 2023) and 162.2 (February 2024). Several new features focus on giving you more control over the performance of deployments by preventing potentially costly operations and opting in to online operations. We've also introduced an alternative option for data portability that can provide significant speed improvements for databases in Azure. Read on for information about these improvements and more, all from the recent releases in the DacFx ecosystem. Information on features and fixes is available in the itemized release notes for SqlPackage.
.NET 8 support
The 162.2 release of DacFx and SqlPackage introduces support for .NET 8. SqlPackage installation as a dotnet tool is available with the .NET 6 and .NET 8 SDK. Install or update easily with a single command if the .NET SDK is installed:
# install
dotnet tool install -g microsoft.sqlpackage
# update
dotnet tool update -g microsoft.sqlpackage
Online index operations
Starting with SqlPackage 162.2, online index operations are supported during publish on applicable environments (including Azure SQL Database, Azure SQL Managed Instance, and SQL Server Enterprise edition). Online index operations can reduce the application performance impact of a deployment by supporting concurrent access to the underlying data. For more guidance on online index operations and to determine if your environment supports them, check out the SQL documentation on guidelines for online index operations.
Directing index operations to be performed online across a deployment can be achieved with a command line property new to SqlPackage 162.2, “PerformIndexOperationsOnline”. The property defaults to false, where just as in previous versions of SqlPackage, index operations are performed with the index temporarily offline. If set to true, the index operations in the deployment will be performed online. When the option is requested on a database where online index operations don’t apply, SqlPackage will emit a warning and continue the deployment.
An example of this property in use to deploy index changes online is:
sqlpackage /Action:Publish /SourceFile:yourdatabase.dacpac /TargetConnectionString:"yourconnectionstring" /p:PerformIndexOperationsOnline=True
More granular control over the index operations can be achieved by including the ONLINE=ON/OFF keyword in index definitions in your SQL project. The online property will be included in the database model (.dacpac file) from the SQL project build. Deployment of that object with SqlPackage 162.2 and above will follow the keyword used in the definition, superseding any options supplied to the publish command. This applies to both ONLINE=ON and ONLINE=OFF settings.
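For instance, a hypothetical index definition in a SQL project (table and index names are illustrative) could pin its online behavior in the definition itself:

```sql
-- ONLINE = ON in the definition supersedes the
-- PerformIndexOperationsOnline publish property.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = ON);
```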
DacFx 162.2 is required for SQL project inclusion of ONLINE keywords with indexes and is included with the Microsoft.Build.Sql SQL projects SDK version 0.1.15-preview. For use with non-SDK SQL projects, DacFx 162.2 will be included in future releases of SQL projects in Azure Data Studio, VS Code, and Visual Studio. The updated SDK or SQL projects extension is required to incorporate the index property into the dacpac file. Only SqlPackage 162.2 is required to leverage the publish property “PerformIndexOperationsOnline”.
Block table recreation
With SqlPackage publish operations, you can apply a new desired schema state to an existing database. You define what object definitions you want in the database and pass a dacpac file to SqlPackage, which in turn calculates the operations necessary to update the target database to match those objects. The set of operations are known as a “deployment plan”.
A deployment plan will not destroy user data in the database in the process of altering objects, but it can have computationally intensive steps or unintended consequences when features like change tracking are in use. In SqlPackage 162.1.167, we've introduced an optional property, /p:AllowTableRecreation, which lets you block any deployment whose plan includes a table recreation step.
/p:AllowTableRecreation=true (default) SqlPackage will recreate tables when necessary and use data migration steps to preserve your user data
/p:AllowTableRecreation=false SqlPackage will check the deployment plan for table recreation steps and stop before starting the plan if a table recreation step is included
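Following the publish example above (file name and connection string are placeholders), a deployment that should fail fast rather than recreate tables could be invoked as:

```shell
sqlpackage /Action:Publish /SourceFile:yourdatabase.dacpac /TargetConnectionString:"yourconnectionstring" /p:AllowTableRecreation=false
```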
SqlPackage + Parquet files (preview)
Database portability, the ability to take a SQL database from a server and move it to a different server even across SQL Server and Azure SQL hosting options, is most often achieved through import and export of bacpac files. Reading and writing a single bacpac file can be difficult when databases exceed 100 GB, and network latency can be a significant concern. SqlPackage 162.1 introduced the option to move the data in your database with parquet files in Azure Blob Storage, reducing the operation overhead on the network and local storage components of your architecture.
Data movement in parquet files is available through the extract and publish actions in SqlPackage. With extract, the database schema (.dacpac file) is written to the local client running SqlPackage and the data is written to Azure Blob Storage in Parquet format. With publish, the database schema (.dacpac file) is read from the local client running SqlPackage and the data is read from Azure Blob Storage in Parquet format.
The parquet data file feature benefits larger databases hosted in Azure with significantly faster data transfer speeds due to the architecture shift of the data export to cloud storage and better parallelization in the SQL engine. This functionality is in preview for SQL Server 2022 and Azure SQL Managed Instance and can be expected to enter preview for Azure SQL Database in the future. Dive into trying out data portability with dacpacs and parquet files from the SqlPackage documentation on parquet files.
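As a sketch of an extract whose table data lands in Azure Blob Storage as parquet. The `/p:AzureStorage*` property names below reflect the preview documentation and should be treated as assumptions to verify against the current SqlPackage docs; all angle-bracket values are placeholders.

```shell
# Extract: schema goes to the local dacpac, data goes to blob storage as parquet.
# Property names are from the preview docs; confirm before use.
cmd='sqlpackage /Action:Extract /SourceConnectionString:"<connection-string>" /TargetFile:MyDb.dacpac /p:AzureStorageBlobEndpoint=https://<account>.blob.core.windows.net /p:AzureStorageContainer=<container> /p:AzureStorageKey=<storage-key>'
echo "$cmd"
```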
Microsoft.Build.Sql
The Microsoft.Build.Sql library for SDK-style projects continues in the preview development phase, and version 0.1.15-preview was just released. Code analysis rules can now be executed at build time with .NET 6 and .NET 8, opening the door to quality and performance reviews of your database code in the SQL project. To enable code analysis rules on your project, add the <RunSqlCodeAnalysis>True</RunSqlCodeAnalysis> property seen on line 7 of the following sample to your project definition.
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.1.15-preview" />
  <PropertyGroup>
    <Name>synapseexport</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
    <RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>
  </PropertyGroup>
</Project>
During build time, the objects in the project will be checked against a default set of code analysis rules. Code analysis rules can be customized through DacFx extensibility.
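Assuming RunSqlCodeAnalysis behaves like any other MSBuild property (the project file name below is a placeholder), it can also be toggled for a single build without editing the project file:

```shell
# Run code analysis for this build only, via an MSBuild property override.
cmd='dotnet build MyDatabase.sqlproj /p:RunSqlCodeAnalysis=True'
echo "$cmd"
```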
Ways to get involved
In early 2024, we added preview releases of SqlPackage to the dotnet tool feed, so you not only get early access to DacFx changes but can also test SqlPackage directly. Get quick instructions for installing and updating the preview releases in the SqlPackage documentation.
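The commands follow the standard dotnet-tool pattern; the --prerelease flag is what pulls the preview builds of the microsoft.sqlpackage package:

```shell
# Install the preview build of SqlPackage as a global dotnet tool,
# or update an existing install to the latest preview.
install_cmd='dotnet tool install --global microsoft.sqlpackage --prerelease'
update_cmd='dotnet tool update --global microsoft.sqlpackage --prerelease'
printf '%s\n%s\n' "$install_cmd" "$update_cmd"
```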
Most of the issues fixed in this release were reported through our GitHub community, and in several cases the person reporting put together great bug reports with reproducing scripts. Feature requests are also discussed within the GitHub community in some cases, including the online index operations and blocking table recreation capabilities. All are welcome to stop by the GitHub repository to provide feedback, whether it is bug reports, questions, or enhancement suggestions.
Microsoft Tech Community – Latest Blogs –Read More
Azure Sphere OS version 24.03 is now available for evaluation
Azure Sphere OS version 24.03 is now available for evaluation in the Retail Eval feed. The retail evaluation period for this release provides 28 days (about 4 weeks) of testing. During this time, please verify that your applications and devices operate properly with this release before it is deployed broadly to devices in the Retail feed.
The 24.03 OS Retail Eval release includes bug fixes and security updates, including additional security updates beyond those in the cancelled 23.10 release, and a fix for the sporadic OS-update issue that caused that release's cancellation.
For this release, the Azure Sphere OS contains an updated version of cURL. The Azure Sphere OS provides long-term ABI compatibility; however, the mechanics of how cURL-multi operates, particularly with regard to recursive calls, have changed since the initial release of the Azure Sphere OS. Microsoft has performed additional engineering to provide backward compatibility for previously compiled applications to accommodate these changes. Even so, this is a special area of focus for compatibility testing during this evaluation.
If your application leverages cURL-multi (as indicated by use of the `curl_multi_add_handle()` API), we encourage you to perform additional testing against the 24.03 OS. These changes do not impact applications that use the cURL-easy interface (as indicated by use of the `curl_easy_perform()` API).
Areas of special focus for compatibility testing with 24.03 include apps and functionality utilizing:
cURL and cURL-multi
wolfSSL, TLS-client, and TLS-server
Azure IoT, DPS, IoT Hub, IoT Central, Digital Twins, C SDK
Mutual Authentication
For more information on Azure Sphere OS feeds and setting up an evaluation device group, see Azure Sphere OS feeds and Set up devices for OS evaluation.
For self-help inquiries or technical support, review the Azure Sphere support options.
Don’t miss out! Register now for the upcoming Ability Summit, March 7, 2024.
Discover how AI can fuel your accessibility innovation.
How can AI drive inclusion? In this free digital event, you will learn how to unleash human potential with accessible AI. The newest tech can bridge the accessibility divide and we’re bringing together leading experts who have a deep understanding of disabilities in the workplace. Together, we can embed inclusive solutions and improve opportunities for people across the disability spectrum.
Register here!
Partner Blog | 2024 Microsoft Partner events
by Nicole Dezen, Chief Partner Officer and Corporate Vice President, Global Partner Solutions
2023 was a year of exciting growth and innovation here at Microsoft. This year’s focus is to empower our customers and partners through AI transformation, and we’re excited to share what will be an impactful lineup of events for 2024. Attending these events provides you with the opportunity to learn, grow, and make defining connections with experts from around the world.
Microsoft Inspire: the next chapter
Continue reading here
Authorization_RequestDenied
Hi team,
We are currently trying to get user email address removed for privacy reasons through the https://graph.microsoft.com/v1.0/users/email address removed for privacy reasons
We get the below error: (Forbidden:Authorization_RequestDenied) Insufficient privileges to complete the operation. Date: 2024-02-28T15:26:23. Request Id: ce9dca50-7dba-44d4-be95-e5a4d14aada0. Client Request Id: ce9dca50-7dba-44d4-be95-e5a4d14aada0.
We have granted the access policy to this user and we do have the correct scopes. Could you look at the logs and let us know what might be the issue here?
Thanks,
Vakul
Azure Virtual Network now supports updates without subnet property
The Azure API supports the HTTP methods PUT, GET, and DELETE for CRUD (Create/Retrieve/Update/Delete) operations on your resources. The PUT operation is used for both Create and Update. For an existing resource, a PUT that includes the existing child resources preserves them and adds any new resources supplied in the JSON. If any existing resources are omitted from the JSON in the PUT operation, those resources are removed from the Azure deployment.
Based on customer support cases and feedback, we observed that this behavior causes problems for customers performing updates to existing deployments. This is a particular challenge for subnets in a VNet, where any update to the virtual network, or the addition of resources to it (e.g., adding a routing table), requires you to supply the entire virtual network configuration including all of the subnets. To make this easier, we have implemented a change in the PUT API behavior for virtual network updates. This change allows you to skip the subnet specification in a PUT call without deleting the existing subnets. This capability is now available in a Limited Preview in all the EUAP regions, US West Central and US North with API version 2023-09-01.
Previous behavior
The existing behavior is to expect a subnets property in the PUT virtual network call. If the subnets property isn't included, the subnets are deleted, which might not be the intention.
New PUT VNet behavior
Assuming your existing configuration is as follows:
"subnets": [
  {
    "name": "SubnetA",
    "properties": {…}
  },
  {
    "name": "SubnetB",
    "properties": {…}
  },
  {
    "name": "SubnetC",
    "properties": {…}
  },
  {
    "name": "SubnetD",
    "properties": {…}
  }
]
The updated behavior is as follows:
If a PUT virtual network request doesn't include a subnets property, no changes are made to the existing set of subnets.
If the subnets property is explicitly marked as empty, we treat this as a request to delete all the existing subnets. For example:
"subnets": []
OR
"subnets": null
If the subnets property is supplied with specific values as follows:
"subnets": [
  {
    "name": "SubnetA",
    "properties": {…}
  },
  {
    "name": "Subnet-B",
    "properties": {…}
  },
  {
    "name": "Subnet-X",
    "properties": {…}
  }
]
In this case, the following changes are made to the virtual network:
SubnetA is unchanged, assuming the supplied configuration is the same as the existing one.
SubnetB, SubnetC and SubnetD are deleted.
Two new subnets Subnet-B and Subnet-X are created with the new configuration.
This behavior, where an explicitly supplied subnets list replaces the existing set, is unchanged from what Azure has today.
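As a sketch of the first case: the update body below omits the subnets property entirely, so with API version 2023-09-01 in a preview region, a PUT with it would leave SubnetA through SubnetD untouched. The address space, region, and the commented az rest call are illustrative assumptions, with placeholder resource names.

```shell
# A VNet update body with no "subnets" property at all: under the new
# behavior this no longer deletes the existing subnets.
cat > vnet-update.json <<'JSON'
{
  "location": "westcentralus",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }
  }
}
JSON

# Illustrative call (subscription/resource names are placeholders):
# az rest --method put \
#   --url "https://management.azure.com/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>?api-version=2023-09-01" \
#   --body @vnet-update.json
```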
Next Steps
Test the new behavior in the regions listed above and share your feedback.
The ABCs of ADX: Learning the Basics of Azure Data Explorer | Data Exposed: MVP Edition
You may have heard of Azure Data Explorer – but do you know what it does? Do you know the best ways to use it (and the ways you shouldn't)? Do you know some things it does better than anything else in the Microsoft data platform? Join us for a walkthrough of what Azure Data Explorer is, what it isn't, and how to leverage it to offer your customers, colleagues, and users another tool in their data toolbox.
Resources:
Learning path: https://learn.microsoft.com/en-us/training/paths/data-analysis-data-explorer-kusto-query-language/
Help Cluster with built-in test data: https://dataexplorer.azure.com/clusters/help
View/share our latest episodes on Microsoft Learn and YouTube!
Microsoft Teams Phone empowers frontline workers with smart and reliable communication
Teams Phone is a cloud calling solution that equips your entire workforce with flexible, reliable, and smart calling capabilities, all within Microsoft Teams. Earlier this month, we introduced a new Teams Phone for Frontline Workers offer¹ that enables frontline workers to securely communicate with customers, colleagues, or suppliers in Teams.
Teams Phone keeps frontline workers mobile and connected with dedicated numbers and devices, making it a versatile solution for employees in various industries and job functions. For instance, a retail store associate can easily respond to customer inquiries on product information, or nurses can directly connect with their patients from anywhere, across devices. With Teams Phone, you can:
Route calls to the right person at the right time with auto-attendants, call queues, and call delegation.
Communicate securely with patients with electronic health record application integration, call recording, and transcription.
Create meaningful customer engagements with CRM system integration, consultative transfers, and call park.
Set frontline teams up quickly with shared calling, allowing groups of users to make and receive calls with a shared phone number and calling plan.
Simplify communication with common area phones in shared spaces
In today’s fast-paced work environment, effective communication is essential for seamless operations. However, not all frontline workers need a dedicated phone number to perform their tasks. In some scenarios, they may only need to make or receive occasional calls on behalf of a department. Common area phones cater to this need and unlock easy to use calling capabilities for frontline workers. With common area phones, frontline workers can make and receive calls through a shared mobile Android device, or a desk phone assigned to their team or department.
Common area phones in shared spaces have several use cases. A shared device can help retail store associates who are managing incoming calls in the curbside pick-up department, or receptionists in a clinic who are managing appointment scheduling requests. With common area phones, you can:
Route incoming calls efficiently, easily, and exactly where you need them with auto-attendants, call queues, call transfer, shared line appearance, and call park.
Relay important information between teams in real time with Walkie Talkie in Teams as well as hotline phones programmed to dial one number.
How to Get Started
Teams Phone for individual users
Get the Teams Phone for Frontline Workers license¹, available as an add-on to Microsoft 365 F1 and F3.
Use Teams Phone from any device where you’re logged into the Teams app. See the full list of certified devices here.
Learn more about how to set up Teams Phone.
Common area phones
Get the Teams Shared Device license.
Learn more about how to set up desk phones as common area phones or how to set up an Android mobile phone as a common area phone.
¹ Microsoft Teams Phone Standard for Frontline Workers ($4 user/month) will be available as an add-on to Microsoft 365 F1 ($2.25 user/month) and F3 ($8 user/month). Listed pricing may vary due to currency, country, and regional variant factors. Contact your Microsoft sales representative to learn more.
Enhanced Performance in Additional Regions: Azure Synapse Analytics Spark Boosted by up to 77%
We are committed to continually advancing the capabilities of Azure Synapse Analytics Spark, and are pleased to announce substantial improvements that could increase Spark performance by as much as 77%.
Performance Metrics
Our internal testing, utilizing the 1TB TPC-H industry standard benchmark, indicates performance gains of up to 77%. It’s important to note that individual workloads may vary, but the enhancements are designed to benefit all Azure Synapse Analytics Spark users.
Technological Foundations
This performance uptick is attributable to our transition to the latest Azure v5 Virtual Machines. These VMs bring improved CPU performance, increased SSD throughput, and elevated remote storage IOPS.
Regional Availability
We have implemented these performance improvements in the following regions, bold indicates a new region:
Australia East
Australia Southeast
Canada Central
Canada East
Central India
Germany West Central
Japan West
Korea Central
Poland Central
South Africa North
South India
Sweden Central
Switzerland North
Switzerland West
UAE North
UK South
UK West
West Central US
Additionally, all Microsoft Fabric regions, with the exception of Qatar Central, are already operating with these enhanced performance capabilities.
Future Rollout
The global rollout of these improvements is an ongoing process and expected to take several quarters to complete. We will provide updates as additional regions are upgraded. Customers in updated regions will automatically benefit from the performance enhancements at no additional cost.
Next Steps for Users
No action is required on your part to benefit from these improvements. Once your region receives the upgrade, you may notice reduced job completion times. If cost-efficiency is a priority, you may opt to decrease node size or the number of nodes while maintaining improved performance levels.
Learn more about Optimizing Spark performance, Apache Spark pool configurations, Spark compute for Data Engineering and Data Science – Microsoft Fabric
Stop Worrying and Love the Outage, Vol II: DCs, custom ports, and Firewalls/ACLs
Hello, it’s Chris Cartwright from the Directory Services support team again. This is the second entry in a series where I try to provide the IT community with some tools and verbiage that will hopefully save you and your business many hours, dollars, and frustrations. Here we’re going to focus on some major direct and/or indirect changes to Active Directory that tend to be pushed onto AD administrators. I want to arm you with the knowledge required for those conversations, and hopefully some successful deflection. After all, isn’t it better to learn the hard lessons from others, if you can?
Periodically, we get cases about replication failures, many times involving lingering objects. Almost without fail, the cause is one of the following:
DNS
SACL/DACL size
Network communication issues
Network communications issues almost always come down to a “blinky box” in the middle that is doing something it shouldn’t be, whether due to defective hardware/software, misconfiguration, or the ever-present misguided configuration. Today, we’re going to focus on the third: misguided configuration. That is to say, the things that your compliancy section has said must be done, that have little to no security benefit, but can easily result in a multi-hour, multi-day, or even (yes) multi-week outage. To be fair, the portents of an outage should be readily apparent with any monitoring in place. However, sometimes reporting agents are not installed, fail to function properly, or are misconfigured, or the events themselves are missed (alert fatigue). So, one of the things to do when compliancy starts talking about locking down DC communications is to ask them…
What is the problem you are trying to solve?
Have you been asked to isolate DCs? Create a lag site? Make sure that X DCs can only replicate with Y DCs?
The primary effect of doing any of this is alert fatigue for replication errors, which is a path to outage later. Additionally, if you have “Bridge all site links” enabled, you are giving the KCC the wrong information to create site links.
Don’t permanently isolate DCs
Don’t create lag sites
Do make sure you have backups in place
Do make sure KCC has the correct information, and then let it be unless your network topology changes.
Do make sure all DCs in the forest can reach all DCs in the forest (If your networks are fully routable)
Have you been asked to configure DCs to restrict the RPC ports they can use?
Every AD administrator should be familiar with the firewall ports and how RPC works. By default, DCs will register on these port ranges and listen for traffic. The RPC Endpoint Mapper keeps track of these ports and tells incoming requests where to go. One thing that RPC Endpoint Mapper will not do is keep track of firewall or ACL changes that were made.
Again, what is the security benefit here? It is one thing to control DC communications outbound from the perimeter. It is another thing to suggest that X port is more secure than Y port, especially when we’re talking about ports upon which DCs are listening. If your compliancy team is worried about rogue applications listening on DCs, you have bigger problems, like rogue applications existing on your DCs, presumably put there by rogue agents who now have control over your entire Active Directory.
The primary effect of “locking down” a DC in this way is not to improve security, but to mandate the creation or modification of some document with fine print, “Don’t forget to add these registry keys to avoid an outage”, that will inevitably be lost during turnover. Furthermore, going too far can lead to port exhaustion, another type of outage.
Don’t restrict AD/Netlogon to static ports without exhaustively discussing the risks involved, and heavily documenting it.
Don’t restrict the RPC dynamic range without exhaustively discussing the risks involved, and heavily documenting it.
Do restrict inbound/outbound perimeter traffic to your DCs.
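For context, these are the documented static-port registry values that the fine-print document would have to preserve, shown here only to make the operational risk concrete. The port numbers are the examples used in the linked articles, not recommendations; treat the exact values as assumptions to verify against the referenced documentation.

```shell
# AD DS (NTDS) and Netlogon static-port settings, expressed as the reg.exe
# invocations a Windows admin would run on a DC. Losing track of these keys
# after a lockdown is exactly the outage scenario described above.
ntds_cmd='reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "TCP/IP Port" /t REG_DWORD /d 38901'
netlogon_cmd='reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v DCTcpipPort /t REG_DWORD /d 38902'
printf '%s\n%s\n' "$ntds_cmd" "$netlogon_cmd"
```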
“Hey, you said multi-day or multi-week outages. It’s not that hard to fix replication!”
It is true that once you’ve found the network issue preventing replication, it is usually an easy fix. However, if the “easy” fix is to rehost all your global catalog partitions with tens of thousands of objects on 90+ DCs, requiring manual administrative intervention and a specific sequence of commands because your environment is filled with lingering objects, you’re going to be busy for a while.
Wrapping it up
As the venerable Aaron Margosis said, “…if you stick with the Windows defaults wherever possible or industry-standard configurations such as the Microsoft Windows security guidance or the USGCB, and use proven enterprise management technologies instead of creating and maintaining your own, you will increase flexibility, reduce costs, and be better able to focus on your organization’s real mission.”
Security is critical in this day and age, but so is understanding the implications and reasons beyond some check box on an audit document. Monitoring is also critical, but of little use if polluted with noise. Remember who will be mainlining caffeine all night to get operations back online when the lingering objects start rolling in, because it will not be the people that click the “scan” button once a month…
References
Creating a Site Link Bridge Design | Microsoft Learn
You Are Not Smarter Than The KCC | Microsoft Learn
Configure firewall for AD domain and trusts – Windows Server | Microsoft Learn
RPC over IT/Pro – Microsoft Community Hub
Remote Procedure Call (RPC) dynamic port work with firewalls – Windows Server | Microsoft Learn
Restrict Active Directory RPC traffic to a specific port – Windows Server | Microsoft Learn
10 Immutable Laws of Security | Microsoft Learn
Sticking with Well-Known and Proven Solutions | Microsoft Learn
Chris “Was it really worth it” Cartwright
Azure DevOps blog closing -> moving to DevBlogs
Hello! We will be closing this Azure DevOps blog soon on Tech Community as part of consolidation efforts. We appreciate your continued readership and interest in this topic.
For Azure DevOps blog posts (including the last 10 posted here), please go here: Azure DevOps Blog (microsoft.com)
Creating Intelligent Apps on App Service with .NET
You can use Azure App Service to work with popular AI frameworks like LangChain and Semantic Kernel connected to OpenAI for creating intelligent apps. In the following tutorial we will be adding an Azure OpenAI service using Semantic Kernel to a .NET 8 Blazor web application.
Prerequisites
An Azure OpenAI resource or an OpenAI account.
A .NET 8 Blazor Web App. Create the application with a template here.
Setup Blazor web app
For this Blazor web application, we’ll be building off the Blazor template and creating a new razor page that can send and receive requests to an Azure OpenAI OR OpenAI service using Semantic Kernel.
Right click on the Pages folder found under the Components folder and add a new item named OpenAI.razor
Add the following code to the OpenAI.razor file and click Save
@page "/openai"
@rendermode InteractiveServer
<PageTitle>Open AI</PageTitle>
<h3>Open AI Query</h3>
<input placeholder="Input query" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<h4>Server response:</h4> <p>@serverResponse</p>
@code {
    public string? newQuery;
    public string? serverResponse;
}
Next, we’ll need to add the new page to the navigation so we can navigate to the service.
Go to the NavMenu.razor file under the Layout folder and add the following div in the nav class. Click Save
<div class="nav-item px-3">
    <NavLink class="nav-link" href="openai">
        <span class="bi bi-list-nested-nav-menu" aria-hidden="true"></span> Open AI
    </NavLink>
</div>
After the Navigation is updated, we can start preparing to build the OpenAI client to handle our requests.
API Keys and Endpoints
In order to make calls to OpenAI with your client, you will need to first grab the Keys and Endpoint values from Azure OpenAI or OpenAI and add them as secrets for use in your application. Retrieve and save the values for later use.
For Azure OpenAI, see this documentation to retrieve the key and endpoint values. For our application, you will need the following values:
deploymentName
endpoint
apiKey
modelId
For OpenAI, see this documentation to retrieve the api keys. For our application, you will need the following values:
apiKey
modelId
Since we’ll be deploying to App Service, we can secure these secrets in Azure Key Vault for protection. Follow the Quickstart to set up your Key Vault and add the secrets you saved earlier.
Next, we can use Key Vault references as app settings in our App Service resource to reference in our application. Follow the instructions in the documentation to grant your app access to your Key Vault and to set up Key Vault references.
Then, go to the portal Environment Variables blade in your resource and add the following app settings:
For Azure OpenAI, use the following:
DEPLOYMENT_NAME = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
ENDPOINT = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
API_KEY = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
MODEL_ID = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
For OpenAI, use the following:
OPENAI_API_KEY = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
OPENAI_MODEL_ID = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
Once your app settings are saved, you can bring them into the code by injecting IConfiguration and referencing the app settings. Add the following code to your OpenAI.razor file:
@inject Microsoft.Extensions.Configuration.IConfiguration _config
@code {
    private async Task SemanticKernelClient()
    {
        string deploymentName = _config["DEPLOYMENT_NAME"];
        string endpoint = _config["ENDPOINT"];
        string apiKey = _config["API_KEY"];
        string modelId = _config["MODEL_ID"];
        // OpenAI
        string OpenAIModelId = _config["OPENAI_MODEL_ID"];
        string OpenAIApiKey = _config["OPENAI_API_KEY"];
    }
}
Semantic Kernel
Semantic Kernel is an open-source SDK that enables you to easily develop AI agents to work with your existing code. You can use Semantic Kernel with Azure OpenAI and OpenAI models.
To create the OpenAI client, we’ll first start by installing Semantic Kernel.
To install Semantic Kernel, browse the NuGet package manager in Visual Studio and install the Microsoft.SemanticKernel package. For NuGet Package Manager instructions, see here. For CLI instructions, see here.
Once the Semantic Kernel package is installed, you can now initialize the kernel.
Initialize the Kernel
To initialize the Kernel, add the following code to the OpenAI.razor file.
@using Microsoft.SemanticKernel

@code {
    private async Task SemanticKernelClient()
    {
        var builder = Kernel.CreateBuilder();
        var kernel = builder.Build();
    }
}
Here we are adding the using statement and creating the Kernel in a method that we can use when we send the request to the service.
Add your AI service
Once the Kernel is initialized, we can add our chosen AI service to the kernel. Here we will define our model and pass in our key and endpoint information to be consumed by the chosen model.
For Azure OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
For OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddOpenAIChatCompletion(
modelId: OpenAIModelId,
apiKey: OpenAIApiKey
);
var kernel = builder.Build();
Configure prompt and create Semantic function
Now that our chosen OpenAI service client is created with the correct keys, we can add a function to handle the prompt. With Semantic Kernel, you handle prompts through semantic functions, which turn the prompt and its configuration settings into a function the Kernel can execute. Learn more about configuring prompts here.
First, we’ll create a variable to hold the user’s prompt. Then we’ll add a function with execution settings to handle and configure the prompt. Add the following code to the OpenAI.razor file:
@using Microsoft.SemanticKernel.Connectors.OpenAI
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
}
Lastly, we’ll need to invoke the function and return the response. Add the following to the OpenAI.razor file:
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
var result = await kernel.InvokeAsync(summarize);
serverResponse = result.ToString();
}
Here is the example in its completed form. In this example, use the Azure OpenAI chat completion service OR the OpenAI chat completion service, not both.
@page "/openai"
@rendermode InteractiveServer
@inject Microsoft.Extensions.Configuration.IConfiguration _config
@using Microsoft.SemanticKernel
@using Microsoft.SemanticKernel.Connectors.OpenAI
<PageTitle>OpenAI</PageTitle>
<h3>OpenAI input query: </h3>
<input class="col-sm-4" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<br />
<h4>Server response:</h4> <p>@serverResponse</p>
@code {
private string? newQuery;
private string? serverResponse;
private async Task SemanticKernelClient()
{
// Azure OpenAI
string deploymentName = _config["DEPLOYMENT_NAME"];
string endpoint = _config["ENDPOINT"];
string apiKey = _config["API_KEY"];
string modelId = _config["MODEL_ID"];
// OpenAI
// string OpenAIModelId = _config["OPENAI_DEPLOYMENT_NAME"];
// string OpenAIApiKey = _config["OPENAI_API_KEY"];
// Semantic Kernel client
var builder = Kernel.CreateBuilder();
// Azure OpenAI
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
// OpenAI
// builder.Services.AddOpenAIChatCompletion(
// modelId: OpenAIModelId,
// apiKey: OpenAIApiKey
// );
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
var result = await kernel.InvokeAsync(summarize);
serverResponse = result.ToString();
}
}
Now save the application and follow the next steps to deploy it to App Service. If you would like to test it locally first, you can swap out the config values with the literal string values of your OpenAI service. For example: string modelId = "gpt-4-turbo";
Deploy to App Service
If you have followed the steps above, you are ready to deploy to App Service. If you run into any issues, remember that you need to have done the following: granted your app access to your Key Vault, and added the app settings with Key Vault references as your values. App Service resolves the app settings in your application that match what you’ve added in the portal.
Authentication
Although optional, it is highly recommended that you also add authentication to your web app when using an Azure OpenAI or OpenAI service. This can add a level of security with no additional code. Learn how to enable authentication for your web app here.
Once deployed, browse to the web app and navigate to the OpenAI tab. Enter a query and you should see a populated response from the server. The tutorial is now complete, and you now know how to use OpenAI services to create intelligent applications.
Microsoft Tech Community – Latest Blogs –Read More
MDTI Earns Impactful Trio of ISO Certificates
We are excited to announce that Microsoft Defender Threat Intelligence (MDTI) has achieved ISO 27001, ISO 27017, and ISO 27018 certifications. ISO, the International Organization for Standardization, develops market-relevant international standards that support innovation and provide solutions to global challenges, including information security requirements for establishing, implementing, and improving an Information Security Management System (ISMS).
These certificates underscore the MDTI team’s continuous commitment to protecting customer information and adhering to the strictest security and privacy standards.
Certificate meaning and importance
ISO 27001: This certification demonstrates that MDTI’s ISMS complies with industry best practices, providing a structured approach to managing information security risk.
ISO 27017: This is a worldwide standard that provides guidance on securing information in the cloud. Our certification demonstrates that we have put in place strong controls and countermeasures to ensure our customers’ data is safe when stored in the cloud.
ISO 27018: This standard sets out common objectives, controls, and guidelines for protecting personally identifiable information (PII) processed in public clouds, consistent with the privacy principles outlined in ISO 29100. Our ISO 27018 certification shows that we are committed to respecting our customers’ privacy rights and protecting their personal data in cloud computing.
What are the advantages of these certifications for our customers?
Enhanced Security and Privacy Assurance: Our customers can be confident that the most sophisticated and comprehensive security and privacy standards offered in the market are in place to protect their data. We meet and exceed the requirements of these certifications, so customer information stays secure from emerging threats.
Reduced Risk and Liability Exposure: Through our certified ISMS and Privacy Information Management System (PIMS), customers can significantly reduce their exposure to potential data breaches, legal actions, regulatory fines, and reputational harm. Our proven frameworks strengthen resilience against cybercrime and reduce the risk of lawsuits.
Streamlined Compliance and Competitive Edge: Our certifications help customers meet the rigorous regulatory and contractual requirements of their industry or market. Accreditation against international standards signals that an organization is serious about data security, improving its reputation and opening opportunities to partner with other businesses that value privacy protection.
How do I get started with MDTI?
If you are interested in learning more about MDTI, how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, and the features and benefits it offers, please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote.
Microsoft Tech Community – Latest Blogs –Read More
Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large, to the Mistral AI collection of models in the Azure AI model catalog today. Mistral Large will be available through Models-as-a-Service (MaaS), which offers API-based access and token-based billing for LLMs, making it easier to build generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground, or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety: first, the model has built-in support for a “safe prompt” parameter, and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI’s most advanced Large Language Model (LLM), first available on Azure and the Mistral AI platform. It can be used for a full range of language-based tasks thanks to its state-of-the-art reasoning and knowledge capabilities. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked into the model, with an additional safety layer via the safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let’s review samples for some popular clients.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on the Mistral AI platform, which allows you to use any of the Mistral AI platform clients with Azure APIs. Sample notebook for the Mistral Python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started with and offers a consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and a VS Code extension for building LLM apps, with support for authoring, orchestration, evaluation, and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key, and call it with the SDK of your choice from the Python tool in prompt flow.
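If you prefer to call the REST API directly rather than through one of the clients above, a minimal sketch of building the request might look like the following. The endpoint URL and key here are placeholders (copy the real values from your deployment page), and the payload shape assumes the chat-completions schema described in the reference documentation linked above:

```python
import json
import urllib.request

# Placeholder values -- copy the real endpoint URL and API key
# from your deployment page in Azure AI Studio.
ENDPOINT = "https://<your-deployment>.<region>.inference.ai.azure.com/v1/chat/completions"
API_KEY = "<your-api-key>"

def build_request(prompt: str, max_tokens: int = 100) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "safe_prompt": True,  # prefix the system prompt with a guardrail instruction
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_request("Why is the sky blue?")
# response = urllib.request.urlopen(req)  # uncomment once real values are set
print(json.loads(req.data)["messages"][0]["content"])  # -> Why is the sky blue?
```

This uses only the standard library; in practice you would more likely reach for one of the sample clients listed above.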
Develop with integrated content safety
Mistral AI APIs on Azure come with a two-layered safety approach: instructing the model through the system prompt, and an additional content filtering system that screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction, as documented here. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. The external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive either an error (if the prompt was classified as harmful) or a partially or completely truncated response with an appropriate message (if the generated output was classified as harmful). Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
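Since a filtered prompt surfaces as an error and a filtered completion as a truncated response, client code should handle both paths. A minimal sketch, where the response shape and the content_filter finish reason are illustrative assumptions rather than the exact service contract:

```python
def handle_completion(status_code: int, body: dict) -> str:
    """Illustrative handling of the two content-filter outcomes:
    a rejected prompt (error status) or a truncated completion."""
    if status_code != 200:
        # Prompt was classified as harmful: surface a friendly message.
        return "Your request could not be processed; please rephrase it."
    choice = body["choices"][0]
    if choice.get("finish_reason") == "content_filter":
        # Completion was cut off by the content filtering system.
        return choice["message"]["content"] + " [response truncated by content filter]"
    return choice["message"]["content"]

# Example with a mocked response body:
ok = {"choices": [{"message": {"content": "Hello!"}, "finish_reason": "stop"}]}
print(handle_completion(200, ok))  # -> Hello!
```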
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing on the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and JSON output?
The Mistral Large model can do function calling and generate JSON output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
The Mistral Large API comes with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn’t suffice.
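If your workload approaches these limits, the usual pattern is to retry throttled calls (HTTP 429) with exponential backoff. A generic sketch, not tied to any particular client library:

```python
import time

def call_with_backoff(send, max_retries: int = 5, base_delay: float = 1.0):
    """Retry send() with exponential backoff whenever it reports throttling.
    send should be a zero-argument callable returning (status_code, body)."""
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body  # give up after max_retries attempts

# Example with a fake sender that is throttled twice, then succeeds:
calls = iter([(429, None), (429, None), (200, {"ok": True})])
status, body = call_with_backoff(lambda: next(calls), base_delay=0.01)
print(status)  # -> 200
```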
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in AI Studio projects or Azure Machine Learning workspaces in the East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in projects or workspaces in other regions, you can manually create a connection in prompt flow using the API and key. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistral Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.