Live at Build: Microsoft Learn releases new AI skill-building resources
Microsoft Learn is excited to be at Microsoft Build again this year with a fantastic new onsite presence and to share announcements about new resources to support AI skill-building.
When it comes to AI, having the right resources to develop critical new skills can be a game changer, whether you’re managing your organization’s training needs or advancing your own career. The 2024 Work Trend Index Annual report from Microsoft and LinkedIn suggests a massive opportunity for those willing to skill up in AI—66% of leaders say they wouldn’t hire someone without AI skills.
Whether you’re a developer learning to build AI-powered solutions, a team lead looking to skill up your team, or a leader seeking to understand the benefits that Microsoft Copilot can bring to your organization, Microsoft Learn has something for you. That’s why we’re thrilled to announce the new AI skill-building resources we’re releasing today at Microsoft Build:
NEW AI Applied Skills releasing in May and June.
NEW Plans for AI skill-building.
NEW Copilot learning hub.
Additionally, I’m pleased to introduce two new AI skill-building offerings designed for non-technical roles:
NEW Copilot for Microsoft 365 training sessions for business users.
COMING SOON AI instructor-led training for business leaders.
Read on for more details about these exciting announcements.
Growing the Microsoft Applied Skills for AI portfolio
We developed Microsoft Applied Skills, new verifiable credentials that validate specific real-world skills, to help you address your skills gaps and empower you with the in-demand expertise you need. The positive feedback we’re receiving about the great value these credentials offer to individuals and organizations motivates us to keep expanding the portfolio.
During May and June we’re releasing new Applied Skills credentials to support developers who build AI and cloud solutions, including:
Develop AI agents using Microsoft Azure OpenAI and Semantic Kernel
Implement a data science and machine learning solution with Microsoft Fabric
Implement a Real-Time Intelligence solution with Microsoft Fabric
We’re also releasing new credentials for key cloud scenarios relevant to IT professionals:
Administer Active Directory Domain Services
Deploy and manage Microsoft Azure Arc–enabled servers
Explore Microsoft Applied Skills
The current portfolio of Microsoft Credentials includes over 20 Microsoft Applied Skills and close to 50 industry-recognized Microsoft Certifications, providing you with verifiable skill sets aligned with AI and cloud job roles and projects. Learn more about Microsoft Credentials.
Stay focused on your AI skill-building with new Plans
To stay current with today’s job skills, it’s important to have the right training content. Organizational team leaders and trainers must have the ability to customize and share this content, encourage their learners to stay on track, and monitor learning progress.
Today we’re introducing new AI skill-building Plans on Microsoft Learn, designed to meet all these objectives and more. Plans help learners, teams, and organizations accelerate the achievement of their learning goals using curated sets of structured content combined with milestones and automated nudges to keep learners focused and motivated. Get all the details about Plans in our recent blog post Introducing Plans on Microsoft Learn.
Find our new AI Plans on the AI learning hub on Microsoft Learn:
Master the basics of Azure: AI Fundamentals
Microsoft Copilot for Microsoft 365 for executives
Using AI in your everyday work: GitHub Copilot
Learn to create apps and modernize with Azure OpenAI
Check out the new Copilot learning hub
We’re also excited to announce the new Copilot learning hub on Microsoft Learn, the place where technology professionals can find resources—tailored to their job role and career goals—to help them develop the skills to put Microsoft Copilot to work every day.
As a complement to the already existing AI learning hub, this new hub offers tutorials, videos, and documentation covering the basics of Copilot, along with its features, capabilities, prompting techniques, best practices, and troubleshooting tips. The learning hub also showcases real-world examples and use cases of Copilot in different domains and scenarios, including content specific to developers, data and IT professionals, security analysts, and more.
Microsoft Learn is here to support your AI learning goals, whatever they may be. Choose the AI learning hub when looking to gain skills in all Microsoft’s AI apps and services, regardless of your business or technical role. Choose the Copilot learning hub when looking to deepen your technical expertise in Microsoft Copilot.
Visit the Copilot learning hub
New live Microsoft Copilot for Microsoft 365 training sessions for business users
The widespread adoption of AI across organizations requires a new approach to skill-building that focuses on upskilling all staff, from leadership and IT to business users, enabling them to fully leverage their AI investments.
I’m pleased to announce a new series of live Microsoft Copilot for Microsoft 365 training sessions for business users designed to help key roles in your organization learn how to use Microsoft Copilot for Microsoft 365 to unlock productivity. Each session is delivered in less than one hour and is available in multiple languages and time zones.
The training content is tailored to the following roles:
Executives—Learn how Copilot can synthesize communication history in Teams and create speeches and presentations with Word and PowerPoint.
Sales—Learn how Copilot helps with market research, reports, and recommendations. Use it for sales deals, contracts, and more.
IT—Learn how to use Copilot to summarize a product spec document, create a project plan and business presentation, and draft an email with highlights for a network security product.
Marketing—Learn how to use Copilot to analyze market trends, forecast sales, generate campaign ideas, and consolidate reports.
Finance—Learn how to use Copilot to analyze a spreadsheet with projected revenue, create a marketing campaign report, and summarize your company’s financial statement results.
HR—Learn how to use Copilot to create a job description, analyze multiple resumes, create interview questions and a candidate report, and compose an offer letter to a candidate.
Ops—Learn how to use Copilot to brainstorm a project plan, locate and summarize email threads, troubleshoot equipment issues, and create customer discovery questions.
Explore Microsoft Copilot for Microsoft 365 training sessions
Instructor-led training coming soon: Microsoft AI for business leaders
Microsoft Learn is also releasing our latest instructor-led training (ILT) called Microsoft AI for business leaders, which is designed to help business leaders find the knowledge and resources to adopt AI in their organizations. The training explores planning, strategizing, and scaling AI projects in a responsible way, focusing on use cases, tools, and insights from industry-specific AI success stories such as healthcare, finance, sustainability, retail, and manufacturing.
This new AI-focused training will be available in July 2024 through select Training Services Partners (TSP) with the expertise to deliver unique value to business leaders. Authorized TSPs offer a breadth of training solutions including blended learning, in-person, and online to meet your learning objectives.
Stay tuned for more information about this new AI instructor-led training.
Find AI-ready Training Services Partners
Explore AI skill-building with Microsoft Learn
Microsoft Learn is leading the way in bringing the latest AI skilling and credentials to our community of learners. We’ll continue to help you gain the skills you need to achieve more with technology, through interactive training and resources on Microsoft products and services. We look forward to sharing more news and updates in the coming weeks.
Continue your learning journey beyond Build at Microsoft Learn.
Microsoft Tech Community – Latest Blogs –Read More
Announcing Custom Categories in Azure AI Content Safety
We are excited to announce that Custom Categories is coming soon to Azure AI Content Safety. This new feature enables you to create your own customized classifier based on your specific needs for content filtering and AI safety whether you want to detect sensitive content, moderate user-generated content, or comply with local regulations. Use Custom Categories to train and deploy your own custom content filter with ease and flexibility.
Feature Overview
The Azure AI Content Safety custom categories feature is powered by Azure AI Language, a service that provides advanced natural language processing capabilities for text analysis and generation. The custom categories feature is designed to provide a streamlined process for creating, training, and using custom content classification models.
Here’s an overview of the underlying workflow:
Deploy your custom category when you need it
We are offering two deployment options for our customers:
Custom Categories (Standard):
The Standard option for deploying custom categories is aimed at providing a thorough and robust filtering mechanism. It requires a minimum of 50 lines of natural language examples to train the category. This depth of training material ensures that the custom filter is well-equipped to identify and moderate the specified types of content accurately.
Deployment Timeframe: The Standard option is designed to deploy within 24 hours, balancing speed with the need for a comprehensive understanding of the content to be filtered.
Custom Categories (Rapid):
The Rapid option caters to urgent content safety needs, allowing organizations to respond swiftly to emerging threats and incidents. It requires only a definition and a few natural language examples to deploy a text incident, or a few example images to deploy an image incident. This reduced requirement facilitates quicker creation and deployment of custom filters.
Deployment Timeframe: This option emphasizes speed, enabling the deployment of new custom filters in around an hour for text and a few minutes for images. It is particularly useful for addressing immediate and unforeseen content safety challenges.
Both options serve to empower organizations with the capability to protect their AI applications and users more effectively against a wide array of harmful content and security risks, offering a balance between responsiveness and thoroughness based on the specific needs and circumstances.
How to use this feature?
Step 1: Definition and Setup
By creating a custom category, you are telling the AI exactly which types of content you wish to detect and mitigate. You need to create a clear category name and a detailed definition that encapsulates the content’s characteristics. The setup phase is crucial, as it lays the groundwork for the AI to understand your specific filtering needs.
Then, collecting a small, balanced dataset with both positive and (optionally) negative examples allows the AI to learn the nuances of the category. This data should be representative of the variety of content the model will encounter in a real-world scenario.
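A minimal Python sketch of this setup step may help make it concrete. Note that the field names and helper functions below are assumptions for illustration only, not the actual Azure AI Content Safety schema or API; the 50-example threshold reflects the Standard deployment option's stated minimum.

```python
# Illustrative only: these field names are NOT the real
# Azure AI Content Safety schema; they just model the concepts.
def build_category(name, definition, positive, negative=()):
    """Bundle a custom category definition with its training examples."""
    return {
        "categoryName": name,
        "definition": definition,
        "positiveExamples": list(positive),
        "negativeExamples": list(negative),  # optional counterexamples
    }

def check_dataset(category, standard_minimum=50):
    """Basic sanity checks before submitting a category for training."""
    pos = category["positiveExamples"]
    issues = []
    if not category["definition"].strip():
        issues.append("definition is empty")
    if len(pos) < standard_minimum:
        issues.append(
            f"{len(pos)} positive examples provided; "
            f"Standard expects at least {standard_minimum}"
        )
    return issues

cat = build_category(
    "unsolicited-medical-advice",
    "Text that offers specific medical diagnoses or treatment advice.",
    positive=["You should take 400mg of ibuprofen twice a day."],
)
print(check_dataset(cat))  # flags the undersized training set
```

A check like this catches an undersized or empty dataset before training time is spent on it.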
Step 2: Model Training
Once you have your dataset ready, the Azure AI Content Safety service uses it to train a new model. During training, the AI analyzes the data, learning to distinguish between content that matches the custom category and content that does not. Built on the LLM-powered, low-touch customization technology from Azure AI Language, the experience is tailored for Content Safety customers, emphasizing consistency and a sharper focus on content moderation scenarios.
Step 3: Model Inferencing
After training, you need to evaluate the model to ensure it meets your accuracy requirements. This is done by testing the model with new content that it hasn’t seen before. The evaluation phase helps you identify any potential adjustments needed before deploying the model into a production environment.
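The evaluation step can be as simple as scoring the model's predictions against labels on held-out content. A minimal sketch of computing precision and recall, independent of any specific service API:

```python
def precision_recall(predicted, actual):
    """Precision/recall for a binary 'matches the category' label.

    predicted, actual: parallel lists of booleans for held-out items.
    """
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# The model flags 3 of 4 held-out items: 2 correctly, 1 false positive,
# and it misses 1 true item, so precision = recall = 2/3.
pred = [True, True, True, False]
true = [True, True, False, True]
print(precision_recall(pred, true))
```

Low precision suggests the category definition is too broad (or needs negative examples); low recall suggests the positive examples don't cover enough variety.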
Step 4: Iteration
In the upcoming release of the custom categories studio experience, we will introduce a feature that allows users to refine their definitions and training samples using suggestions generated by GPT.
Join our customers using Custom Categories
South Australia Department for Education
“The Custom Categories feature from Azure AI Content Safety is set to be a game-changer for the Department for Education in South Australia, and our pioneering AI chatbot, EdChat. This new feature allows us to tailor content moderation to our specific standards, ensuring a safer and more appropriate experience for users. It’s a significant step towards prioritizing the safety and well-being of our students in the digital educational space.”
– Dan Hughes, Chief Information Officer, South Australia Department for Education
Learn more about how South Australia Department for Education is using Azure AI Content Safety
Stay tuned!
Thank you for your support as we continue to enhance our platform. We are excited for you to begin using custom categories. Stay tuned for more updates and announcements on our progress.
In the meantime, we encourage you to visit our Content Safety documentation or studio to explore the existing capabilities available to you. Custom categories is also coming soon to Azure AI Studio and Azure OpenAI Service.
Introducing in-database embedding generation for Azure Database for PostgreSQL
via the azure_local_ai extension to Azure Database for PostgreSQL
We are excited to announce the public preview release of azure_local_ai, a new extension for Azure Database for PostgreSQL that enables you to create text embeddings from a model deployed within the same VM as your PostgreSQL database.
Vector embeddings enable AI models to better understand relationships and similarities between data, which is key for intelligent apps. With the azure_local_ai extension, Azure Database for PostgreSQL offers in-database embedding generation with a text embedding model deployed within the PostgreSQL boundary. Embeddings can be generated right within the database, offering:
single-digit millisecond latency
predictable costs
confidence that data will remain compliant for confidential workloads
In this release, the extension deploys a single model, multilingual-e5-small, to your Azure Database for PostgreSQL Flexible Server instance. The first time an embedding is created, the model is loaded into memory. See the preview terms for the azure_local_ai extension.
azure_local_ai extension – Preview
Generate embeddings from within the database with a single line of SQL code invoking a UDF.
Harness the power of a text embedding model alongside your operational data without leaving your PostgreSQL database boundary.
During this public preview, the azure_local_ai extension will be available in these Azure regions:
East US
West US
West Europe
UK South
France Central
Japan East
Australia East
How does the azure_local_ai extension work?
In-database embedding architecture
ONNX Runtime Configuration
The azure_local_ai extension supports reviewing the configuration parameters of the ONNX Runtime thread pool within the ONNX Runtime service; changes are not currently allowed. See ONNX Runtime performance tuning.
Valid values for key are:
- intra_op_parallelism: Sets the total number of threads used by the ONNX Runtime thread pool to parallelize a single operator. By default, the number of intra-op threads is maximized (half of the available CPUs), as this substantially improves overall throughput.
- inter_op_parallelism: Sets the total number of threads used by the ONNX Runtime thread pool to compute multiple operators in parallel. By default, it is set to the minimum possible value, 1; increasing it often hurts performance due to frequent context switches between threads.
- spin_control: Toggles the ONNX Runtime thread pool’s spinning for requests. When disabled, it uses less CPU at the cost of higher latency. By default, it is set to true (enabled).
SELECT azure_local_ai.get_setting(key TEXT);
Generate embeddings
The azure_local_ai extension for Azure Database for PostgreSQL makes it easy to generate an embedding from a simple inline UDF call in your SQL statement passing the model name and the data input to generate the embedding.
-- Single embedding
SELECT azure_local_ai.create_embeddings('multilingual-e5-small:v1', 'Vector embeddings power GenAI applications');
-- Simple array embedding
SELECT azure_local_ai.create_embeddings('multilingual-e5-small:v1', array['Recommendation System with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI.', 'Generative AI with Azure Database for PostgreSQL - Flexible Server.']);
Here’s a quick example that demonstrates:
Adding a vector column to a table with a default that generates an embedding and stores it when data is inserted.
Creating an HNSW index.
Completing a semantic search by generating an embedding for a search string and comparing with stored vectors with a cosine similarity search.
-- Create docs table
CREATE TABLE docs(doc_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, doc TEXT NOT NULL, last_update TIMESTAMPTZ DEFAULT NOW());

-- Add a vector column and generate vector embeddings from the locally deployed model
ALTER TABLE docs
ADD COLUMN doc_vector vector(384) -- multilingual-e5 embeddings are 384 dimensions
GENERATED ALWAYS AS -- Generated on inserts
(azure_local_ai.create_embeddings('multilingual-e5-small:v1', doc)::vector) STORED; -- TEXT string sent to local model

-- Create an HNSW index
CREATE INDEX ON docs USING hnsw (doc_vector vector_ip_ops);

-- Insert data into the docs table
INSERT INTO docs(doc) VALUES
('Create in-database embeddings with azure_local_ai extension.'),
('Enable RAG patterns with in-database embeddings and vectors on Azure Database for PostgreSQL - Flexible Server.'),
('Generate vector embeddings in PostgreSQL with azure_local_ai extension.'),
('Generate text embeddings in PostgreSQL for retrieval augmented generation (RAG) patterns with azure_local_ai extension and locally deployed LLM.'),
('Use vector indexes and Azure OpenAI embeddings in PostgreSQL for retrieval augmented generation.');

-- Semantic search using vector similarity match
SELECT doc_id, doc, doc_vector
FROM docs d
ORDER BY
d.doc_vector <#> azure_local_ai.create_embeddings('multilingual-e5-small:v1', 'Generate text embeddings in PostgreSQL.')::vector
LIMIT 1;

-- Add a single record to the docs table; its vector embedding will be generated automatically by azure_local_ai and the locally deployed model
INSERT INTO docs(doc) VALUES ('Semantic Search with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI');

-- View all doc entries and their doc_vector column. A vector embedding will have been generated for the single record added above.
SELECT doc, doc_vector, last_update FROM docs;
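For context on the <#> operator in the ORDER BY above: in pgvector, <#> is the negative inner product, so ascending sort order surfaces the best matches first (and for normalized embeddings, inner-product ranking matches cosine similarity). A small Python sketch of the same arithmetic, using made-up three-dimensional vectors rather than real 384-dimensional embeddings:

```python
def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

def pgvector_ip_distance(a, b):
    # pgvector's <#> operator returns the NEGATIVE inner product,
    # so that ORDER BY ... ASC puts the best match first.
    return -inner_product(a, b)

query = [0.1, 0.9, 0.2]
docs = {
    "doc_a": [0.1, 0.8, 0.3],  # similar to the query
    "doc_b": [0.9, 0.1, 0.0],  # dissimilar
}
ranked = sorted(docs, key=lambda d: pgvector_ip_distance(query, docs[d]))
print(ranked[0])  # doc_a
```

This is why the example creates the HNSW index with the vector_ip_ops operator class: the index and the query must agree on the distance function.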
Getting Started
To get started, review the azure_local_ai extension documentation, enable the extension, and begin creating embeddings from your text data without leaving the Azure Database for PostgreSQL boundary.
azure_local_ai extension overview
Generate vector embeddings with azure_local_ai extension
vector extension
Learn more about vector similarity search using pgvector
What’s new in Azure AI Language | BUILD 2024
Introduction
At Azure AI Language, we believe that language is at the core of human and artificial intelligence. As part of Azure AI, which offers a comprehensive suite of AI services and tools for developers, Azure AI Language empowers developers to build intelligent natural language solutions that leverage a set of state-of-the-art language models, including Z-Code++, fine-tuned GPT, and more. While the LLMs in Azure OpenAI and the model catalog are good general-purpose models, Azure AI Language provides a set of prebuilt and customizable natural language capabilities that are fine-tuned and optimized for a wide range of scenarios, such as Personally Identifiable Information (PII) detection, document and conversation summarization, text analytics for the healthcare domain, and conversational intent identification, with leading quality and cost efficiency. These capabilities are available through a unified API that simplifies the integration and orchestration of natural language capabilities with no need for complex prompt engineering.
Today, we’re thrilled to announce more new features and capabilities designed to make your workflow more seamless and efficient than ever before at this year’s Microsoft Build with the following key highlights: 1) a unified experience for Azure AI Language in Azure AI Studio and improved integration with prompt flow, 2) improvements in existing prebuilt features such as Summarization, PII and NER, and 3) enhancements in custom features, especially in Conversational Language Understanding (CLU) to provide intent identification and entity extraction with higher quality in more regions.
Azure AI Language now available in Azure AI Studio and prompt flow
As part of Azure AI services, Azure AI Language now supports the new Azure AI services resource type for prebuilt capabilities like summarization, Personally Identifiable Information (PII) detection, and many others. It lets you access all Azure AI services, including Language, Speech, and Vision, with a single resource, which makes it easier to integrate AI capabilities from across Azure AI. In the next few months, we will also support the customization capabilities of Azure AI Language in Azure AI Studio.
We are excited to introduce Azure AI Language in Azure AI Studio with two new playgrounds for you to try out: Summarization and Personally Identifiable Information detection. Both help infuse generative AI into your solutions. In Azure AI Studio, you have more options to try out and explore how to use them effectively for your needs.
Prompt flow in Azure AI Studio is a development tool designed to streamline the entire development cycle of AI applications. We are happy to announce that Language’s prompt flow tooling is now available in Azure AI prompt flow gallery. With that, you can explore and use various natural language processing features from Azure AI Language in prompt flow. You can quickly start to make use of Azure AI Language, reduce your time to value, and deploy solutions with reliable evaluation.
What’s new in prebuilt features in Azure AI Language service
Azure AI Language’s prebuilt capabilities enable customers to get up and running quickly without the need for model training. These prebuilt services are designed to accelerate time-to-value through pretrained models optimized for specific language AI tasks, including Personally Identifiable Information (PII) detection, Named Entity Recognition (NER), Summarization, Text Analytics for Health, Language Detection, Key Phrase Extraction, and Sentiment Analysis and opinion mining.
Because many customers want to use language AI to derive insights from native documents like Word docs and PDFs, minimizing turnaround time and eliminating the need for data preprocessing, we recently released a public preview of native document support for the PII detection and Summarization services. More file formats and capabilities will be added as the feature moves toward general availability.
Here is more information regarding what’s new in Azure AI Language’s prebuilt features:
2.1. Announcing general availability of Conversational PII
Azure AI Language’s PII service can help detect and protect an individual’s identity and privacy in both generative and non-generative AI applications, which is critical for highly regulated industries such as financial services, healthcare, and government. The PII service also supports Protected Health Information (PHI) and Payment Card Industry (PCI) data, and it’s available in 79 languages for around 30 general entity categories and more than 90 region-specific entity categories. By enabling users to identify, categorize, and redact sensitive information directly from complex text files and native documents in .pdf, .docx, and .txt format, the PII service enables our customers to adhere to the highest standards of data privacy, security, and compliance with a single API call.
Today, we are excited to announce the general availability of conversational PII redaction in English-language contexts to further support customers looking to recognize and redact sensitive information in conversations, particularly in speech transcriptions from meetings and calls, across 6 recognized entity categories for conversations. Customers can now redact transcripts, chats, and other text written in a conversational style (i.e. text with “um”s and “ah”s, multiple speakers, sensitive info in incomplete sentences, and words spelled out for clarity) with better confidence in AI quality, Azure SLA support, production environment support, and enterprise-grade security in mind.
Conversational PII will be available starting in late June. Please see here for the full list of supported languages for the PII service and here for the entity categories recognized in conversational PII.
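To illustrate the redaction concept only (this toy sketch uses regexes and made-up entity patterns; the actual PII service uses trained models behind an API and recognizes far richer entity categories), redaction replaces recognized entity spans with category placeholders:

```python
import re

# Toy patterns for two example entity categories. The real service
# uses trained models, not regexes -- this only shows the output shape.
PATTERNS = {
    "PhoneNumber": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}

def redact(text):
    """Replace each recognized entity span with its category placeholder."""
    for category, pattern in PATTERNS.items():
        text = pattern.sub(f"[{category}]", text)
    return text

transcript = "Um, sure, reach me at 555-010-4242 or jo@example.com."
print(redact(transcript))
# Um, sure, reach me at [PhoneNumber] or [Email].
```

The value of the conversational model is precisely where regexes fail: disfluencies, multiple speakers, and entities spelled out across incomplete sentences.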
2.2. Enhanced address recognition for UK contexts with NER model updates
We are excited to share an updated NER model with improved AI quality and accuracy for both NER and PII detection. This model update largely benefits location entities (e.g. addresses), finance entities (e.g. bank account numbers), and single-letter spell-outs, where a speaker in a transcript may be spelling out a relevant entity (e.g. “M. I. CRO. S. O. F. and T”); our new model shows improved F1 scores and fewer false positive recognitions. The updated model will be available starting in late June.
2.3. General availability of Recap summary for conversations in Summarization
Azure AI Language’s Summarization service enables users to extract key points from the textual content and provide a comprehensive summary of documents or conversations. This service is powered by an ensemble of two sophisticated natural language models in which one is specifically trained for text extraction while the other fine-tuned GPT model is further optimized for text summarization without the need of any prompt engineering. In addition, Azure AI Language’s Summarization service comes with built-in hallucination detection capability.
We appreciate customers’ enthusiasm for Azure AI Language’s Summarization service since we announced its general availability last year. Document abstractive summarization and Conversation summarization capabilities are currently available in 6 regions and 11 languages whereas Custom Summarization is available in East US in English language. Please see Summarization region support article for the full list of supported regions, and Summarization language support article for supported languages.
Today, we are excited to announce the general availability of Recap summary for conversations in Azure AI Language service. This recap summary compresses a long conversation into one short paragraph and captures key information, which has been highly praised by preview customers, especially for many high-volume call center customers. Check out our product document to learn more about the key features in conversation summarization.
What’s new in custom features in Azure AI Language service
Azure AI Language’s custom capabilities empower customers to customize multilingual machine learning models for their specific use case based on a few labeled examples. These custom services include, but are not limited to, Custom Text Classification, Custom Named Entity Recognition (NER), and Conversational Language Understanding (CLU). Powered by state-of-the-art transformer models, Azure AI Language’s custom multilingual models can be trained in one language and used for multiple other languages. In addition to the custom features in the Azure AI Language service, the advanced low-touch customization capability in Azure AI Language now also powers Azure AI Content Safety’s Custom Categories feature for custom content moderation.
As part of custom services in Azure AI Language, Conversational Language Understanding (CLU) enables reliable conversational AI experience with intent identification and entity extraction. Today, we are excited to announce three new features in CLU as follows:
Enhanced support for CLU applications to automate training data augmentation for diacritics
Today, we are introducing a suite of improvements to increase the AI quality of your CLU apps. Many customers already enjoy our training configuration that allows customers to train in one language and use the app in 100+ languages. Since many customers around the world use English keyboards to type in Germanic and Slavic languages, it can be more difficult to classify the utterance into the correct intent without diacritic characters. Because of this, we’re excited to announce a new feature that allows you to automate the training data augmentation for diacritics. When this setting is enabled in your CLU project, CLU will automatically augment your training dataset to reduce the model’s sensitivity to diacritic characters.
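The augmentation idea can be sketched with the Python standard library: each training utterance is paired with a diacritic-stripped copy, so the model sees both spellings during training. This is a conceptual sketch of the technique, not CLU's actual implementation:

```python
import unicodedata

def strip_diacritics(text):
    """Remove diacritic marks, e.g. 'é' -> 'e'."""
    # Decompose characters (NFD), drop combining marks, recompose (NFC).
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return unicodedata.normalize("NFC", stripped)

def augment(utterances):
    """Pair each utterance with its diacritic-free variant, if different."""
    augmented = []
    for u in utterances:
        augmented.append(u)
        bare = strip_diacritics(u)
        if bare != u:
            augmented.append(bare)
    return augmented

print(augment(["réserver une table", "book a table"]))
# ['réserver une table', 'reserver une table', 'book a table']
```

With both variants in the training set, a classifier is less likely to miss the intent when a user on an English keyboard types "reserver" instead of "réserver".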
Derive more insights from additional granular entities in CLU applications
Many of our customers enjoy the ease of leveraging prebuilt entity recognition, like location, in their custom models. However, it can be helpful to know even more about an entity phrase. We are excited to introduce more granular entities in CLU: for an utterance such as “New York”, you can now recognize not just a location but also additional details such as city or state. Check out CLU supported prebuilt entity components for a full list of supported prebuilt entities.
Improved CLU training configuration to address CLU model scoring inconsistencies
We have released a new CLU training configuration that is designed to address scoring inconsistencies, especially related to managing confidence scores and ‘None’ intent classification for off-topic utterances. We are excited to see how this new training configuration (available in 2024-06-01-preview via REST API) improves your model’s performance.
Availability of CLU authoring service in Azure US Government cloud
As our government and defense customers expand their use of conversational AI, the need for Azure AI in government-compliant clouds has grown, so we are announcing that CLU authoring service is now available in the Azure US Government cloud. This means that you can build, manage, and deploy your custom CLU models for government use cases with the same ease and functionality as in the public cloud.
We are looking forward to seeing how these new CLU capabilities will provide you with more flexibility and control, as you develop conversational AI solutions in your enterprise.
Summary
We look forward to seeing our customers use these capabilities to enhance productivity, summarize insights, protect data privacy and build intelligent chat experiences based on content in natural language. As always, Azure AI Language team remains committed to delivering innovative solutions that enable our customers to achieve their goals. We welcome your feedback as we strive to continuously improve and evolve our services with state-of-the-art AI models to offer the best managed and compliant natural language processing capabilities to our customers in Azure AI Language service.
Learn more about Azure AI Language in the following resources:
Azure AI Language homepage: https://aka.ms/azure-language
Azure AI Language product documentation: https://aka.ms/language-docs
Azure AI Language product demo videos: https://aka.ms/language-videos
Explore Azure AI Language in Azure AI Studio: https://aka.ms/AzureAiLanguage
Prompt flow in Azure AI Studio: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow
Native document support for PII and Summarization: https://aka.ms/language-native-docs-support
Conversational PII detection: https://aka.ms/conversational-pii
Summarization overview: https://aka.ms/summarization-docs
Conversational Language Understanding overview: https://aka.ms/language-clu
Microsoft Tech Community – Latest Blogs –Read More
Developing AI-enhanced apps of the future with Microsoft’s adaptive cloud approach
As our annual Build conference is about to kick off this week, I’m thrilled to share several product announcements to empower developers to take advantage of Azure’s adaptive cloud approach: Edge Storage Accelerator public preview, Azure Monitor pipeline public preview, Secrets Sync Controller private preview, Jumpstart Agora for Manufacturing general availability, Jumpstart Drops public preview, Visual Studio Code Extension public preview.
There has never been a more exciting time to be an application developer. With cloud native practices and hyperscale cloud services increasingly available at the edge, developers can access data, build for environments and extend to use cases previously unavailable to them. At the same time AI advances are driving efficiency into the application development process and enabling the creation of innovative industry solutions.
However, to take advantage of this progress, developers and adjacent teams need to manage the challenges stemming from legacy systems, heterogeneous environments, fragmented data and lack of standardization. The need for a unified platform and system to achieve this potential and overcome these obstacles becomes increasingly evident. We believe Azure is the platform that can help, and we have been investing in Azure Arc to solve these problems. We see an opportunity to do more by bringing together agility and intelligence so that our customers can proactively adapt to change. This is what we refer to as our adaptive cloud approach.
This approach has enabled customers like US-based DICK’S Sporting Goods to re-imagine its customer experience and implement a “one store” strategy where they can write, deploy, manage and monitor software across all 800+ locations nationwide. Similarly, Coles, an Australian supermarket retailer, has embraced AI-driven solutions for inventory management, personalized shopping experiences, loss prevention and more.
“Win-win solutions are those where we are helping our team members and our customers at the same time. Our technological investments into operational efficiency have translated into real, tangible benefits for our shoppers.”
– Silvio Giorgio, GM of Data & Intelligence at Coles Group
The AI-infused developer opportunity
One of the key principles of our adaptive cloud approach is Kubernetes everywhere: providing the same scalability and agility developers expect from their cloud solutions when they build for the edge. Azure Arc, our solution for consistent multi-cloud and on-premises management, works with any CNCF-certified Kubernetes cluster, including our first-party Azure Kubernetes Service, to enable application developers to build and run software seamlessly across cloud and edge. As a result, developers can focus on the application itself instead of worrying about where and how it will run across their company’s physical footprint.
The starting point for developers building distributed applications is the same toolset they already use, powered by recent releases and improvements. GitHub Actions gives developers the ability to automate, customize, and execute their software development workflows in their GitHub repositories. GitHub Copilot further speeds the development of edge solutions with coding suggestions, help solving problems, and more.
These tools, combined with Flux and Azure Container Registry, complete the GitOps workflow for consistent and efficient application rollouts across cloud to edge environments.
Distributing software updates via GitOps
DevOps and beyond
There is, however, a lot more to building and scaling applications across boundaries than Arc-enabled Kubernetes and GitOps workflows can deliver alone. DevOps teams need to create pipelines for deployment, testing, and monitoring applications. They want to manage network connectivity, automate application security, deploy and manage infrastructure as code (IaC) components and maintain the overall container orchestration layer.
To support these requirements, we are building a robust set of foundational services that will be available natively and fully supported via Azure Arc. Once you onboard to Azure Arc, these services will be available on your clusters for applications to take dependencies on and use. Among these foundational services, we have recently announced the release of Edge Storage Accelerator and Secrets Sync Controller (details below), with more announcements coming soon.
Foundational Services
Solution orchestration for the edge
The environments that edge applications operate in are heterogeneous and diverse, creating challenges, such as the lack of a single programming interface (API), for developers and engineers who are trying to stitch together a larger solution (a factory solution, a software-defined vehicle, and so on). To help solve this, Microsoft is investing in the Eclipse Foundation Symphony project. Symphony is a platform-independent “orchestrator of orchestrators” engine that allows solution providers to declare a single deployment manifest for various endpoint deployments. Symphony ingests the deployment manifest, drives the various orchestration platforms (such as Kubernetes, Linux shell, and Windows), and returns feedback on whether the deployment was successful. We welcome ecosystem contributions to this project.
Getting the most out of the Adaptive Cloud Ecosystem
While many of our customers develop edge applications themselves, many if not all also purchase solutions from third parties. The specific types of applications differ by industry, but two key partner types play a major role in customers’ edge solutions.
Independent Software Vendors (ISVs)
ISVs play a critical role in providing third-party edge solutions for customers. To ensure that an ISV’s solution can run on Arc-enabled Kubernetes, we have created the Azure Arc ISV partner program, a technical validation of the partner’s solution on the platform. Isovalent, HashiCorp, and Intel are examples of partners that have completed the program.
ISVs can also publish their containerized applications on Azure Marketplace as Kubernetes apps for deployment on Arc-enabled Kubernetes clusters. Kubernetes apps provide flexible billing options that enable ISVs to charge customers through Azure Marketplace.
System Integration (SI) partners
For custom solution development, or simply for help deploying an application developed in-house, customers typically employ an SI. We work with an active ecosystem of SIs versed in modern application development, deployment, and management practices. Partners like Avanade and MaibornWolff are good examples of SIs making an impact for customers with Kubernetes-based application development and deployment at the edge.
“For us, the easy deployment and monitoring of ML models from Azure ML in Kubernetes clusters at the edge is THE game-changing feature of Azure Arc – alongside the ability to use Azure IoT Operations. Both capabilities are essential when we build hybrid cloud smart factory platforms based on Azure technologies.”
– Marc Jäckle, Technical Head of IoT at MaibornWolff
“Azure Arc has enabled us to bring Cloud native services to the Edge of our client’s Industrial solutions without increasing the complexity and effort to manage this fleet of devices that are used to control the shop floor in digital operations scenarios. Having a Standards based execution environment like Kubernetes available to run custom workloads at the Edge or in the Cloud is a big benefit for our customers. Azure and especially Azure Arc fully support these deployments.”
-Juergen Mayrbaeurl, Senior Director at Avanade
Announcements
Ways to help build resilient, observable and secure applications at the edge
Edge Storage Accelerator public preview – At the edge, Kubernetes storage capabilities vary in durability, persistence, and performance, posing a challenge for customers seeking reliable solutions. To address these challenges, we recently introduced Edge Storage Accelerator (ESA), a storage system designed for Arc-connected Kubernetes clusters. ESA offers fault-tolerant, highly available cloud-native persistent storage, empowering customers to confidently host stateful applications, custom apps, and other Arc extensions with ease and reliability. Through standard Kubernetes APIs, containerized applications can manage file data stored on Azure Blob Storage, leveraging its limitless cloud capacity for edge workloads. ESA’s flexible deployment options, simplified connection via a Container Storage Interface (CSI) driver, and platform neutrality transform edge storage solutions, alleviating customer pain points and enabling seamless operations at the edge.
Azure Monitor pipeline public preview – As enterprises scale their infrastructure and applications, the volume of observability data naturally increases, and it is challenging to collect telemetry from certain restricted environments. We are extending the Azure Monitor pipeline to the edge to enable customers to collect telemetry at scale from their edge environments and route it to Azure Monitor for observability. With the Azure Monitor pipeline at the edge, customers can collect telemetry from resources in segmented networks that do not have a line of sight to the cloud. Additionally, the pipeline prevents data loss by caching telemetry locally during periods of intermittent connectivity and backfilling it to the cloud, improving reliability and resiliency.
Secret Sync Controller private preview – Customers want the confidence and scalability that comes with unified secrets management in the cloud, while maintaining disconnection-resilience for operational activities at the edge. To help them with this, the new Secret Synchronization Controller for Kubernetes automatically synchronizes secrets from an Azure Key Vault to a Kubernetes cluster for offline access. This means customers can use Azure Key Vault to store, maintain, and rotate secrets, even when running a Kubernetes cluster in a semi-disconnected state. Synchronized secrets are stored in the cluster secret store, making them available as Kubernetes secrets to be used in all the usual ways—mounted as data volumes or exposed as environment variables to a container in a pod.
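For illustration, an application might consume such a synchronized secret like this (the mount directory and the environment-variable naming convention below are hypothetical examples, not part of the controller’s contract):

```python
# Sketch: read a Kubernetes secret that was synchronized from Azure Key
# Vault, preferring a mounted data volume and falling back to an
# environment variable. Mount path and naming are hypothetical.
import os
from typing import Optional

def read_secret(name: str, mount_dir: str = "/mnt/secrets") -> Optional[str]:
    """Return the secret from a mounted file, else from the environment."""
    path = os.path.join(mount_dir, name)
    if os.path.isfile(path):
        with open(path) as f:
            return f.read().strip()
    # Assumed convention: env var names upper-case the secret name.
    return os.environ.get(name.upper().replace("-", "_"))
```

Because the secret is cached in the cluster secret store, this read path keeps working even when the cluster is temporarily disconnected from Azure.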
Exciting ways to engage and get started with Jumpstart and VSCode
Jumpstart Agora for Manufacturing general availability – Customers want interactive test environments that cover real industry scenarios to learn what Azure Arc and other Azure technologies can help them accomplish for their business. Jumpstart Agora for Manufacturing is a set of comprehensive cloud-to-edge scenarios brought to life through the story of Contoso Motors and its solutions for digital innovation and employee safety. Users will learn how to deploy and interact with the technology behind Contoso Motors’ quality optimization, AI hazard detection, defect detection, and IT/OT observability and control solutions. https://aka.ms/JumpstartAgoraMotorsBlog
Jumpstart Drops public preview – Azure Arc Jumpstart contributors want a unified, accessible, and shareable repository for scripts, sample apps, libraries, dashboards, automations, and comprehensive tutorials useful in testing and deploying Azure Arc-enabled solutions. Jumpstart Drops is a new page on the Jumpstart website that enables users to search for and use pre-built code and artifacts of all types. Users can filter their search by scenario (edge/cloud), tools and languages, tags, code owner, and more. Jumpstart Drops also includes a defined template for making contributions and giving back to the community. Embracing an open-source ethos, the project licenses all contributions under the MIT License. So dive in, explore the collection of amazing Drops already available, and join us and the community in sharing knowledge. https://aka.ms/JumpstartDropsBlog
Visual Studio Code extension public preview – Developers want a single pane of glass and a workbench to complete the entire developer workflow for Arc-enabled applications. We released an Arc Visual Studio Code extension in public preview for Arc and AKS, which includes sample code to access these services, a local environment to test and debug them, and an environment in the cloud to test at larger scale. The extension provides a one-stop shop for developers and helps accelerate development both for workloads that will run on the edge and for those that will be published on Azure Marketplace.
Together, these resources offer the perfect starting point to learn about industry-specific adaptive cloud approach solutions, find code snippets or contribute to the Jumpstart Drops repository, and get started with edge application development. To learn more about these and other exciting offerings that support our adaptive cloud approach, please join us in person or virtually at Microsoft Build.
Here is a list of our sessions. You can also find us on the 5th floor of the convention center at the adaptive cloud approach and community demo stations (within the Expert Meet-Up area).
Breakout session BRK126 | Adaptive cloud approach: Build and scale apps from cloud to edge
Breakout session BRKFP292 | AI Everywhere – Accelerate your development from edge to cloud
Breakout session BRK127 | Azure Monitor: Observability from Code to Cloud
Demo session DEM172 | Next-gen monitoring on Azure
Lab | Taking Azure Kubernetes out of the cloud and into your world (Tuesday/Wednesday/Thursday)
On-demand session OD545 | What’s new in Azure Monitor?
On-demand session OD540 | Improve Application Resilience Using Azure Chaos Studio
To read more about Azure’s adaptive cloud approach, here are some of our latest blogs:
Advancing hybrid cloud to adaptive cloud with Azure | Microsoft Azure Blog
Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Hannover Messe 2024: Scaling Industrial Transformation with Azure’s Adaptive Cloud Approach – Microsoft Community Hub
Build 2024: Azure AI Video Indexer integration with language models for textual video summary
We are thrilled to introduce textual video summarization for recorded video and audio files, powered by large and small language models (LLMs and SLMs).
AI application developers can leverage APIs to create textual summaries for audio and video files, anywhere.
Data analysts, instead of watching entire videos, can benefit from concise summaries of video and audio content and adjust them to their needs.
Azure AI Video Indexer, a cloud and edge video solution, enables textual video summarization with the following Build announcements:
Preview in the cloud: Textual video summarization in Azure AI Video Indexer powered by Azure OpenAI
The textual video summarization feature in Azure AI Video Indexer, cloud edition, is powered by Azure OpenAI. This innovative addition allows customers who have created an Azure OpenAI (AOAI) resource in Azure to seamlessly integrate it with Video Indexer. By leveraging deployments such as GPT-4, users can now enjoy concise textual summaries of their videos, presented as an insightful extract alongside the player page. The video summary not only enhances the viewing experience but also empowers video analysts to tailor the summary’s nuances to align with specific business requirements.
The summary encapsulates the essence of the video content, utilizing not only the transcript but also additional elements derived from the visual and audio aspects of the video, such as a siren or crowd reactions in the background, or any visual text that appears on screen, like signs and other on-screen objects.
Preview at the edge (on-premises): Azure AI Video Indexer enabled by Arc, extended with SLM integration through Phi-3
The preview version of Azure AI Video Indexer enabled by Arc now includes integration with an SLM, Phi-3. This innovation containerizes both the Azure AI and Phi-3 models, giving video analysts the ability to perform video summarization at the edge. It represents a significant stride in our generative AI capabilities, utilizing the cutting-edge Phi-3 model at the edge. The Phi-3 model opens new avenues for AI applications, especially in settings where computing resources are limited, by offering a more streamlined and efficient approach to video analysis.
The Phi-3 model, developed in line with Microsoft’s Responsible AI principles and trained on high-quality data, is a testament to our dedication to safety and excellence in AI. It’s a lightweight, state-of-the-art model designed for long-context support, making it ideal for generating responsive and relevant text in chat formats.
Use cases for video summarization across industries
In education, summarized videos can serve as study aids, allowing students to review lecture content quickly. The capability can also distill lengthy training videos into key takeaways, saving employees time and improving knowledge retention, for example in corporate training.
In media, it helps in quickly understanding the content of large video libraries, like movies or series, without watching the entire footage. This can be particularly useful for editors and content creators who need to create promos or trailers.
In manufacturing, summarized videos can serve as training material or evidence of compliance with regulatory standards and can quickly highlight parts of footage where potential quality issues are detected on the production line.
Retailers can use video summaries to understand customer traffic patterns and preferences without watching hours of footage.
In safety and security, textual summaries can pinpoint instances of theft or suspicious behavior, streamlining the review process for security teams, and can enhance the review of training exercises by identifying key moments for analysis and improvement.
Watch the demo recording to learn more:
Video summarization flavors and customization
Video analysts utilizing the summarization feature will appreciate the added flexibility of feature customization. Tailor your summaries to meet specific needs with selectable options such as “Shorter” for concise overviews, “Longer” for detailed accounts, “Formal” for professional contexts, and “Casual” for a more relaxed tone. This personalized approach ensures that your summaries align perfectly with your intended audience and purpose.
How do I make it available in my Azure AI Video Indexer account?
Use Textual Video Summarization in Your Public Cloud Environment:
If you already have an existing Azure Video Indexer account, follow these steps to use the video summarization:
Create an Azure OpenAI resource in your subscription.
Connect your Azure OpenAI resource to your Video Indexer resource in the Azure portal.
Go to the Azure Video Indexer portal, select a video, and choose “Generate summary”.
For detailed instructions on how to set up this integration, refer to this guidance. Please note that this feature is not available in Video Indexer trial accounts or in legacy accounts that use Azure Media Services. You can also take this opportunity to remove your dependency on Azure Media Services by following these instructions.
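If you prefer to drive summarization programmatically rather than through the portal, a request URL might be assembled as follows (the route and the length/style parameter names here are assumptions for illustration; verify them against the Video Indexer API reference):

```python
# Sketch: build a request URL for Azure AI Video Indexer's textual
# summarization API. The route and the length/style query parameters
# are assumptions, not verified against the official reference.
from urllib.parse import urlencode

def summary_request_url(location, account_id, video_id, access_token,
                        length="Medium", style="Formal"):
    base = (f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
            f"/Videos/{video_id}/Summaries/Textual")
    params = urlencode({"length": length, "style": style,
                        "accessToken": access_token})
    return f"{base}?{params}"

print(summary_request_url("trial", "my-account", "abc123", "my-token",
                          length="Short", style="Casual"))
```

The length and style values mirror the customization flavors described later in this post (shorter or longer summaries, formal or casual tone).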
Use Textual Video Summarization in Your Edge Environment, enabled by Arc:
If your edge appliances are integrated with the Azure Platform via Azure Arc, you’re in for a treat! Here’s how to activate the feature:
Register for Video Indexer (VI) enabled by Arc using this form. Rest assured, we are dedicated to activating the Azure AI Video Indexer Arc-enabled extension in your Video Indexer account within 30 days of your request.
Once activated, create an Azure AI Video Indexer service extension by adhering to these guidelines.
Navigate to the Azure Video Indexer portal, select a video, and click on “Generate Summary” to see the magic happen.
Our Video-to-text API (aka Prompt Content API) now also supports Llama, Phi2 and GPTv4
The prompt content API, which converts video to text based on Video Indexer’s extracted insights, now supports additional models: Llama, Phi2, and GPTv4. This provides more flexibility when converting video content to text. To learn more about this API, refer to the API documentation.
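As a sketch (the modelName parameter and the model identifier values are assumptions; check the linked API documentation for the exact names), a prompt content request URL selecting one of the supported models might be built like this:

```python
# Sketch: build a prompt content request URL that selects a language
# model. The modelName parameter and its accepted values are assumptions.
from urllib.parse import urlencode

def prompt_content_url(location, account_id, video_id, access_token,
                       model="Llama"):
    base = (f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
            f"/Videos/{video_id}/PromptContent")
    return base + "?" + urlencode({"modelName": model,
                                   "accessToken": access_token})
```

The response is an LLM-ready textual representation of the video’s insights, which you can then pass to the chosen model for question answering or summarization.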
Read More
About the feature
Video summarization: Public feature documentation
Transparency note
Prompt content: Video-to-text API
About Azure AI Video Indexer
Visit the Azure AI Video Indexer website
Get started with Azure AI Video Indexer, Enabled by Arc by following this Arc Jumpstart scenario
Visit Azure AI Video Indexer Developer Portal to learn about our APIs
Search the Azure Video Indexer GitHub repository
Review our product documentation.
Get to know the recent features using Azure AI Video Indexer release notes
Use the Stack Overflow community for technical questions.
To report an issue with Azure AI Video Indexer, go to Help + support in the Azure portal and create a new support request. Your request will be tracked within SLA.
For any other question, contact our support distribution list at visupport@microsoft.com
Microsoft Lists branching: is this possible?
Hi,
Is there an option or a way of using branching when creating an MS Form via the new SharePoint list route?
I know it’s an option when creating one directly in the MS Forms app.
Pop-up window announcement in Microsoft Teams
This morning, when I logged into Microsoft Teams, I noticed a pop-up window announcing the Microsoft Teams Public Preview & Targeted Release. It seems this notification may have also been displayed to other users. We are planning to implement a retention policy for Teams chats and would like to distribute similar information to all our Microsoft Teams users. While I am familiar with creating Teams and posting announcements within a channel, I am looking for a way to share this message without setting up a new team or channel. If anyone has experience with this feature or something similar, your insights would be greatly appreciated.
How to fetch/filter users from AD faster using the Get-ADUser command
Recently I saw a few scripts that fetch users from AD like the ones below.
Get-ADUser -LDAPFilter "(whenCreated>=$date)"
or
Get-ADUser -Filter {Enabled -eq $True -and PasswordNeverExpires -eq $False -and PasswordLastSet -gt 0}
or
Get-ADUser -Filter 'Enabled -eq $True'
But queries like the ones above take quite a lot of time, or sometimes fail with a timeout error.
Is there any way to make this faster? Will using -LDAPFilter instead of -Filter make it faster?
Error Message: The operation returned because the timeout limit was exceeded.
Semantic search in Azure AI Studio
Hi everyone
I’ve set up an Azure AI Search index that points to a SharePoint library and connected it to Azure AI Studio so a Chat-GPT model can be used to query the documents in the library. This all works fine if I use the Keyword search type in the chat playground, but if I change this to Semantic I get the following error:
Semantic ranker is enabled in my Azure AI Search service instance, and all works fine when tested in the search settings in the Azure portal. The error isn’t giving me any further information, so I’m not quite sure where I can go from here.
Any assistance would be gratefully received.
Thanks in advance.
Windows 11 Notifications
Hello all,
Hoping someone might be able to help. We are looking for a way to stop our users from being able to change the notification settings on our Windows 10 and 11 devices (e.g., not be able to turn notifications off, or change which notifications are allowed and which are not).
We are hoping there may be a way via the registry, Group Policy, or a config profile in Intune, but we have had a look and can’t find anything.
Many thanks
Using OR in a formula
I need a formula to give 3 different answers based on the value of one cell in a worksheet that could change.
J19 is the variable cell in my worksheet. The value of (J9-J20) may be a positive or negative number, and I need a value in J21 based on whether it is positive or negative. If positive, I need the result; if negative, I need the cell to be 0.
If J20 is zero, I need the value to be the sum of another cell, J23.
These are the 3 formulas that give the correct answers, but I need an OR applied across them to get the correct answer in cell J25.
J25
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is LARGE
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is smaller than the charges
=IF(J20=0,J23)+E18 works if the deductible is zero and there is a copay
Is this even possible to solve for?
Thank you,
Donna
Announcing general availability of real-time diarization
We are excited to announce the general availability of real-time diarization, an enhanced add-on feature of the Azure Speech service. With this feature, you can get live (real-time) speech-to-text transcription separated by speaker (Guest1, Guest2, Guest3, etc.), so that you know which speaker spoke each part of the transcribed conversation.
What Is Real-time Diarization?
Diarization is a feature that differentiates speakers in audio. Real-time diarization can distinguish speakers’ voices in single-channel audio in streaming mode. Combined with speech-to-text functionality, diarization provides transcription output that contains a speaker entry for each transcribed segment. The output is tagged as GUEST1, GUEST2, GUEST3, and so on, based on the number of speakers in the conversation. The graph below demonstrates the difference between transcription results with and without diarization.
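The per-segment speaker tags lend themselves to simple post-processing. As an illustrative sketch (the (speaker, text) segment format below is invented for the example, not the SDK’s actual result type), consecutive segments from the same speaker can be merged into readable conversational turns:

```python
# Sketch: post-process diarized output by merging consecutive segments
# from the same speaker into conversational turns. The segment format
# is invented for illustration.
def merge_turns(segments):
    """Merge consecutive segments that share a speaker tag."""
    turns = []
    for speaker, text in segments:
        if turns and turns[-1][0] == speaker:
            turns[-1] = (speaker, turns[-1][1] + " " + text)
        else:
            turns.append((speaker, text))
    return turns

segments = [
    ("GUEST1", "Good morning,"),
    ("GUEST1", "shall we start?"),
    ("GUEST2", "Yes, I have the figures ready."),
]
for speaker, text in merge_turns(segments):
    print(f"{speaker}: {text}")
```

This kind of grouping is what makes downstream scenarios such as per-speaker summaries or meeting recaps straightforward to build.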
Use Cases and Scenarios
Real-time diarization can be used in a wide range of scenarios; some typical use cases are listed below. It can also help with accessibility scenarios.
Live Conversation/Meeting Transcription
When speakers are all in the same room with a single-microphone setup, live transcription shows which speaker (e.g., Guest-1, Guest-2, or Guest-3) said what. Combined with GPT models applied to the diarized transcription, you can also generate a meeting or conversation summary or recap, ask questions about the conversation, and more.
Microsoft Teams, for instance, leverages the diarization feature to show live meeting transcription in Teams. Based on the meeting transcription, Microsoft Teams’ Copilot provides a meeting summary, recap, and many other features that let people interact with Copilot about their meetings.
Real-time Agent Assist
Using Speech Analytics (another new feature of the Azure Speech service announced at Build) together with real-time diarization, you can run live transcription analytics to support agent-assist scenarios, helping agents optimally address customers’ questions and concerns.
Live Caption and Subtitle (Translated Caption)
Show live captions or subtitles (translated captions) of meetings, videos, or audios.
What’s Improved Since Public Preview
After the public preview, we put significant effort into improving diarization quality, which was the main feedback we heard from preview users. We released a new diarization model that improves quality by roughly 3% on WDER. In addition, we removed the requirement for 7 seconds of continuous audio from a single speaker: in the preview version, when a speaker first talked, diarization only reached full quality after 7 seconds of continuous audio from that speaker. The GA version no longer has this limitation.
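To give a feel for what a word-level diarization error metric measures, here is a simplified toy computation (this is an illustration of the idea, not the exact WDER definition used by the team):

```python
# Toy sketch: fraction of words attributed to the wrong speaker, given
# word-aligned reference and hypothesis speaker labels. A simplification
# of WDER for illustration only.
def speaker_error_rate(ref_speakers, hyp_speakers):
    assert len(ref_speakers) == len(hyp_speakers)
    wrong = sum(r != h for r, h in zip(ref_speakers, hyp_speakers))
    return wrong / len(ref_speakers)

ref = ["GUEST1", "GUEST1", "GUEST2", "GUEST2"]
hyp = ["GUEST1", "GUEST2", "GUEST2", "GUEST2"]
print(speaker_error_rate(ref, hyp))  # one of four words mislabeled
```

A ~3% relative improvement on such a metric means fewer words end up attributed to the wrong speaker in the final transcript.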
Early Adopters from Diverse Areas
So far, over a thousand customers from diverse industries have tried real-time diarization in a variety of scenarios. Below are some examples.
Medical
Live transcription between doctor and patient, and transcription analytics
Banking
Live meeting transcription
Telecommunication
Conversation transcription, summarization, transcription analytics
Legal
An app to assist trial and appellate attorneys preparing for oral arguments (e.g., capturing the attorneys’ and judges’ positions during mock oral arguments)
Try it Out
To try out real-time diarization, go to Speech Studio (Speech Studio – Real-time speech to text (microsoft.com)) and follow these steps (shown in the screenshot below):
Click on “Show advanced options”.
Use the “Speaker diarization” toggle to turn on or off the real-time diarization.
Real-time diarization is available in all regions that Azure Speech Service supports. It is released through the Speech SDK (version 1.31.0 or higher) and is available in the following SDKs:
C#
C++
Java
JavaScript
Python
To start experiencing the feature, please follow the Quickstart: Real-time diarization.
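The quickstart flow can be sketched in Python roughly as follows. This is a sketch, not the verbatim quickstart: `YOUR_KEY` and `YOUR_REGION` are placeholders for your Speech resource credentials, and the 30-second sleep simply keeps the session alive while transcribing from the default microphone.

```python
# Sketch of real-time diarization with the Speech SDK (>= 1.31.0) in Python.
# Requires: pip install azure-cognitiveservices-speech
import time

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_recognition_language = "en-US"
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)

# ConversationTranscriber emits diarized results with speaker IDs
# such as Guest-1, Guest-2, ...
transcriber = speechsdk.transcription.ConversationTranscriber(
    speech_config=speech_config, audio_config=audio_config)

def on_transcribed(evt):
    # Print each finalized utterance with its attributed speaker.
    if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print(f"{evt.result.speaker_id}: {evt.result.text}")

transcriber.transcribed.connect(on_transcribed)

transcriber.start_transcribing_async().get()
time.sleep(30)  # transcribe live audio for 30 seconds
transcriber.stop_transcribing_async().get()
```

Since this connects to a live Azure service and microphone, it only runs with valid credentials; see the quickstart for the full, supported sample.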
Microsoft Tech Community – Latest Blogs –Read More
No access to reservation page
Hi,
One of my co-workers has admin rights on 4 reservation pages in Bookings. When she accesses a reservation page, she keeps getting a message that she has no permissions for it; she has no access to 3 of the 4 reservation pages.
I have already done the following:
– cleared browser history.
– cleared cache.
– assigned the same permissions to each reservation page.
– tried on a mobile device.
None of these steps has helped.
Can anyone help me solve this persistent problem?
Regards,
Robby
Teams does not properly manage an external monitor on iPad
I have an iPad Air 5 that supports output to an external HDMI monitor through its USB-C port.
When I configure the external monitor as an extended display (not mirrored), Teams seems unable to handle that configuration properly. In detail, I have observed the following issues:
1. When Teams is already open, switching to the external monitor makes it impossible to join meetings (tapping/clicking on Join meeting has no effect).
2. Closing the application and re-opening it (with the external monitor connected) sometimes lets me join the meeting, but the app becomes unusable because the meeting window is shown on the external monitor as a very small window (and apparently no other apps can co-exist with it), while the “main” Teams application stays on the iPad display. When the main Teams application is moved to the external monitor, the meeting window disappears.
This is annoying: every time I have to join a meeting, I have to detach the cable to the external monitor if I want the meeting to run properly.
How to export all sheets as separate files: sheetName.pdf from workbook?
Hello, we’re using Microsoft Excel for Mac version 16.84. We can create workbooks with sheets, but we cannot see how to export all workbook sheets as separate PDF documents with their sheet names as file names.
It is possible to export the whole workbook as a PDF and then drag the individual pages out as their own .pdf documents, but they are saved as 1 (dragged).pdf, 2 (dragged).pdf, etc. We lose the name.
Has anybody else had this issue? Is there any way to export them with their names, as in previous versions of the software?
Thanks all.
Daily Agenda Mail
Hello,
We are using the new Outlook App and are wondering where the option “Receive daily agenda e-mail” is.
Has this feature been removed or where can I find it?
Microsoft Security Development Lifecycle (SDL)
Security and privacy should never be an afterthought when developing software. A formal process must become standard practice to ensure they are considered at all points of the product’s lifecycle. The rise of software supply chain attacks, including the XZ Utils backdoor, the SolarWinds attack, and the Log4j vulnerabilities, highlights the critical need to build security into the software development process from the ground up.
Over the last 20 years, there have been many improvements to the security development lifecycle (SDL) reflecting changes in internal tools and processes. We are excited to announce that this week, we have updated the security practices on the SDL website, and we will continue to update this site with new information on a regular basis.
Microsoft Security Development Lifecycle (SDL) Timeline
In the early 2000s, personal computers (PCs) were becoming increasingly common in the home and the internet was gaining more widespread use. This led to a rise in malicious software looking to take advantage of users connecting their home PCs to the internet. It quickly became evident that protecting users from malicious software required a fundamentally different approach to security.
In January 2002, Microsoft launched its Trustworthy Computing initiative to help ensure Microsoft products and services were built to be inherently highly secure, available, reliable, and with business integrity.
In 2004, the Microsoft Security Development Lifecycle (SDL) was born out of the Trustworthy Computing initiative and introduced security throughout all phases of software development at Microsoft. The SDL began life to bake security and privacy principles into the culture of Microsoft. It originally consisted of a relatively small set of requirements aligned to each phase of the waterfall model of software development, aimed at preventing developers from inadvertently introducing vulnerabilities into their code. It also included a few supporting tools that could identify what was, at the time, a short list of known issues. Back then, the SDL was updated annually. Products were released every two to three years and a final security review to confirm that best practices had been followed was a great advancement from existing approaches.
We no longer live in a world where software releases are months or even years apart. The cloud and continuous integration/continuous deployment (CI/CD) practices enable services to be shipped daily, or sometimes multiple times a day. The software supply chain has grown more complicated as more dependencies on open-source software are created. And while the SDL has continued to evolve to keep up with these changes and the shifting threat landscape, it has also grown more complex.
SDL Now
Secure software development still requires embedding security into each step of the development process, from the design and build stages to deployment and operations (run). The SDL now continuously measures security throughout the development lifecycle. The SDL continues to evolve with the changing landscape of cloud computing, AI, and CI/CD automation. As seen in the image below, security controls are integrated to ensure continuous enforcement of zero trust principles and governance from the Design stage all the way to Run.
The image below shows key security capabilities in each of the stages of the development lifecycle.
The SDL is the approach Microsoft uses to integrate security into DevOps processes (sometimes called a DevSecOps approach). You can use this SDL guidance and documentation to adapt this approach and practices to your organization.
The practices described in the SDL approach can be applied to all types of software development and all platforms from classic waterfall through to modern DevOps approaches and can be generally applied across:
Software – whether you are developing software code for firmware, AI applications, operating systems, drivers, IoT Devices, mobile device apps, web services, plug-ins or applets, hardware microcode, low-code/no-code apps, or other software formats. Note that most practices in the SDL are applicable to secure computer hardware development as well.
Platforms – whether the software is running on a ‘serverless’ platform approach, on an on-premises server, a mobile device, a cloud hosted VM, a user endpoint, as part of a Software as a Service (SaaS) application, a cloud edge device, an IoT device, or anywhere else.
The SDL recommends 10 security practices to incorporate into your development workflows. Applying these 10 security practices is an ongoing process of improvement, so a key recommendation is to start somewhere and keep enhancing as you proceed. This continuous process involves changes to culture, strategy, processes, and technical controls as you embed security skills and practices into DevOps workflows.
Next steps
Head over to the updated SDL site and start adapting the SDL guidance and practices to your organization.
File properties information
Hi all,
I’m looking for ways to scan a repository to get file properties information in any environment using Microsoft solutions.
File properties information such as file name, file type, size, owner, last modified, etc.
Regards
Aaron
Policy personal data text for each service
Hello:
I am building a Bookings page where customers can book different services. I use custom fields to get extra information. For legal reasons, I must include a specific personal data policy text in each service, different in each case. How can I add this text to each service, underneath the custom fields?
Thank you very much.