Month: October 2024
No longer able to update Hyperlink column values.
Since Microsoft has chosen to upend the usability of (previously working perfectly great) SharePoint Lists and we have yet to find a suitable alternative, we’re stuck trying to apply band-aids every day. Today, I’m struck with the issue of no longer being able to edit the value for a hyperlink column.
We’re using this hyperlink column to show employee photos. The file (jpg) is in a shared folder in SharePoint. When I’m in grid edit mode and select the cell to edit, my browser (Edge) just opens a new blank tab. I cannot edit a single item because the form was reformatted in PowerApps and there’s no longer an option to edit this form.
The formatting for the column is below. The URL formatting option for the column is set to Picture.
{"$schema":"https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json","elmType":"a","attributes":{"href":"@currentField","target":"_blank"},"children":[{"elmType":"img","style":{"width":"150px"},"attributes":{"src":"@currentField"}}]}
Introducing Multimodal Embed 3: Powering Enterprise Search Across Images and Text
We are excited to announce that Embed 3, Cohere’s industry-leading AI search model, is now available in the Azure AI Model Catalog—and it’s multimodal! With the ability to generate embeddings from both text and images, Embed 3 unlocks significant value for enterprises by allowing them to search and analyze their vast amounts of data, no matter the format. This upgrade positions Embed 3 as the most powerful and capable multimodal embedding model on the market, transforming how businesses search through complex assets like reports, product catalogs, and design files.
Transform Your Enterprise Search
In the world of enterprise AI, embedding models serve as the engine behind intelligent search applications. These models help employees and customers find specific information within vast libraries of data, enabling faster insights and more efficient decision-making.
How Embed 3 Works
Embed 3 translates input data—whether text or images—into long strings of numbers (embeddings) that represent the meaning of the data. These numerical representations are compared within a high-dimensional vector space to determine similarities and differences. Importantly, Embed 3 integrates both text and image embeddings into the same space, creating a seamless search experience. This advanced capability makes Embed 3 a cornerstone of enterprise search systems, while also playing a crucial role in retrieval-augmented generation (RAG) systems, where it ensures that generative models like Command R have the most relevant context to produce accurate and informed responses.
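As a rough illustration of the shared-space idea, here is a minimal sketch that ranks candidate items against a text query by cosine similarity. The vectors are tiny made-up stand-ins rather than real Embed 3 output, and every name is purely illustrative.

# Conceptual sketch only -- not the Embed 3 API. Embeddings that share one
# vector space can be compared directly, which is what enables text-to-image retrieval.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these came back from the embedding model (4 dimensions for brevity;
# real embeddings have hundreds or thousands of dimensions).
query = np.array([0.9, 0.1, 0.3, 0.0])   # text query
chart = np.array([0.8, 0.2, 0.4, 0.1])   # image of a revenue chart
memo = np.array([0.1, 0.9, 0.0, 0.2])    # unrelated text document

candidates = {"quarterly revenue chart": chart, "HR policy memo": memo}
ranked = sorted(candidates.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
print(ranked[0][0])  # -> "quarterly revenue chart"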
Real-World Use Cases for Multimodal Search
Every business, regardless of size or industry, can benefit from multimodal AI search. Embed 3 enables enterprises to search not just through text but also images, unlocking new possibilities for insight retrieval. Here are some key use cases:
Graphs & Charts
Visual data is essential for understanding complex information. With Embed 3, users can now search for specific graphs and charts based on a text query, empowering faster, more informed decision-making. This feature is particularly valuable for teams that rely on data-driven insights.
E-commerce Product Catalogs
Traditional search methods often restrict users to text-based queries. With Embed 3, retailers can enhance their product search experiences by enabling customers to find products that match their visual preferences. This transforms the shopping experience, increasing engagement and conversion rates.
Design Files & Templates
Designers typically manage large libraries of assets, making it challenging to find specific files. Embed 3 simplifies this process by allowing designers to search for UI mockups, visual templates, and presentation slides using descriptive text. This accelerates the creative process and streamlines workflows.
Industry-Leading Accuracy and Performance
According to Cohere, Embed 3 sets the standard for multimodal embedding models, offering state-of-the-art accuracy across a variety of retrieval tasks. Whether it’s text-to-text or text-to-image search, Embed 3 consistently outperforms other models on leading benchmarks, including BEIR for text-based retrieval and Flickr and COCO for image retrieval tasks.
One of the key innovations of Embed 3 is the unified latent space for both text and image encoders. This simplifies the search process by allowing users to include text and image data in a single database without the need to re-index existing text corpora. Furthermore, the model is designed to compress embeddings to minimize database storage costs, ensuring efficiency at scale. It’s also fully multilingual, supporting over 100 languages and maintaining strong performance on noisy, real-world data.
Key Benefits:
Mixed Modality Search: Uniquely proficient at searching across text and images in a unified space.
High Accuracy: State-of-the-art results on industry-standard benchmarks.
Multilingual Support: Compatible with over 100 languages, making it ideal for global businesses.
How to use Embed 3 on Azure?
Here’s how you can effectively utilize the newly introduced Cohere Embed 3 models in the Azure AI Model Catalog:
Prerequisites:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Familiarize yourself with Azure AI Model Catalog
Create an Azure AI Studio hub and project. Make sure you pick East US, West US3, South Central US, West US, North Central US, East US 2 or Sweden Central as the Azure region for the hub.
Create a deployment to obtain the inference API and key:
Open the model card in the model catalog on Azure AI Studio.
Click on Deploy and select the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation. You can use the API and key with various clients. Check out the samples to get started.
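As one example of using the key with a generic client, the sketch below posts a request to the deployment with Python’s requests library. The endpoint URL, route, and payload shape are placeholders and assumptions; confirm the exact request format against the samples linked from the model card before relying on it.

import requests

ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com"  # placeholder from the deployment page
API_KEY = "<your-api-key>"                                            # placeholder from the deployment page

response = requests.post(
    f"{ENDPOINT}/embeddings",  # assumed route; check the model card samples for the exact path
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={"input": ["quarterly revenue chart for FY24"]},  # assumed payload shape
    timeout=30,
)
response.raise_for_status()
print(response.json())  # embedding vector(s) for the input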
Conclusion
Embed 3 with enhanced image search capabilities is available today in Azure AI Model Catalog and Azure AI Studio. You can begin integrating this cutting-edge multimodal model into your enterprise search applications immediately.
Our team is excited to support your journey into multimodal AI search. If you’d like to learn more about Embed 3, we encourage you to sign up for the Cohere + Microsoft webinar on November 12 for a deep dive into its capabilities and how to leverage it for your business. Developers can also access detailed technical information through our API documentation.
The Strategic Advantage of AI for the Defense Industrial Base
In an era where technological advancements are rapidly reshaping industries, the Defense Industrial Base (DIB) stands at the threshold of a transformative opportunity through the adoption of Artificial Intelligence (AI). The integration of AI into defense operations promises unprecedented efficiency, operational superiority, and strategic advantages. The Department of Defense believes so much in AI technology that they budgeted $1.8B in FY24 to support AI efforts which could include identifying potential threats or targets on the battlefield. Like DoD, the DIB should plan and look at ways AI can advance key priorities. This blog explores four compelling examples of how AI can revolutionize defense contractors’ operations and highlights key insights from the US Department of Defense AI Adoption Strategy.
1. Predictive Maintenance and Asset Management
AI-driven predictive maintenance systems can forecast equipment failures before they occur, ensuring timely repairs and reducing downtime. By analyzing historical data and real-time sensor inputs, AI algorithms can predict when components will fail and schedule maintenance accordingly. This not only extends the lifespan of critical assets but also significantly reduces costs associated with unexpected breakdowns.
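To make the idea concrete, here is a small, self-contained sketch on synthetic data: a classifier learns to flag components likely to fail from a few sensor-derived features. The features, labels, and thresholds are invented for illustration and are not tied to any specific vendor’s system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features per component: mean vibration, peak temperature, operating hours (standardized)
X = rng.normal(size=(1000, 3))
# Synthetic label: higher vibration and temperature make failure within 30 days more likely
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))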
For instance, Microsoft partners like Rolls-Royce and Thales are leveraging AI to enhance their predictive maintenance capabilities. Rolls-Royce uses AI to monitor the health of aircraft engines in real time, allowing for proactive maintenance, reduced operational downtime, improved operational readiness, and increased asset availability. Thales, on the other hand, utilizes AI solutions to manage and maintain complex defense systems, ensuring that critical components are serviced before they fail, thereby increasing the reliability and efficiency of their operations.
2. Supply Chain Optimization
AI can enhance supply chain operations by optimizing inventory management, predicting demand, and improving logistics. Machine learning algorithms can identify patterns and trends in supply chain data, enabling defense contractors to maintain optimal inventory levels, reduce lead times, and minimize the risk of supply shortages.
For example, Lockheed Martin has implemented AI-driven supply chain solutions to improve its inventory management. By using predictive analytics, the company can forecast demand for parts and components more accurately, ensuring that they are readily available when needed and reducing excess inventory.
Other defense contractors are employing AI techniques to streamline logistics and inventory processes. The AI systems analyze historical data and real-time information to optimize the procurement and distribution of materials, leading to cost savings and increased operational efficiency. For instance, AI can analyze location needs, weather disruptions, and similar factors to assist in making supply and demand decisions or adjustments.
3. Enhanced Cybersecurity
With the increasing threat of cyberattacks, AI-powered cybersecurity solutions are essential for protecting sensitive defense data and systems. AI can detect and respond to cyber threats in real-time, identifying anomalies and potential breaches faster than traditional methods. This proactive approach is crucial for safeguarding national security.
Microsoft has been at the forefront of integrating AI into cybersecurity through various innovative solutions. Here are a few examples of how AI can enhance cyber operations:
Threat Detection: Continuously monitor network traffic and detect unusual patterns that may indicate a cyber threat. For instance, if the solution identifies a sudden increase in data transfer from a secure server, it can flag this as a potential data exfiltration attempt. For example, we have seen a 38 percent improvement in threat hunting, investigation, and response as measured by SOC time.
Incident Response: When a security incident occurs, AI can assist in automating the initial response steps. It can isolate affected systems, block malicious IP addresses, and gather forensic data for further analysis. For example, we have seen an 84 percent improvement as measured in time in troubleshooting minor issues, thereby freeing up SOC analysts’ valuable time for higher-level and more impactful work.
Security Enhancements: AI can play a significant role in proactively enhancing an organization’s security posture. By analyzing historical data and identifying trends, it can recommend security policy adjustments, patch management strategies, and configuration changes to mitigate potential vulnerabilities. For example, we have seen a 44 percent improvement in accuracy, and a 63 percent improvement in post-incident summarization and reporting.
4. Autonomous Systems and Robotics
AI enables the development and deployment of autonomous systems and robotics for various defense applications. From unmanned aerial vehicles (UAVs) to ground-based robots, AI enhances the capabilities of these systems, allowing for more efficient and safer operations in hazardous environments.
One prominent example of a defense contractor leveraging AI for the US Department of Defense (DoD) is Lockheed Martin. They have integrated AI capabilities into their F-35 fighter jets, enhancing situational awareness and decision-making processes. This allows the jets to process vast amounts of data in real-time to identify and respond to potential threats more effectively.
Another example is Northrop Grumman’s X-47B unmanned combat air system. The system enables the drone to autonomously refuel mid-air, navigate, and complete missions without direct human intervention, showcasing the potential of autonomous combat operations.
Raytheon Technologies is also at the forefront, developing AI-driven radar systems that can detect and track multiple targets simultaneously. These advanced systems use machine learning algorithms to differentiate between friendly and hostile aircraft, significantly improving situational awareness and response times in complex environments.
Additionally, General Dynamics has been employing AI to enhance cybersecurity measures within the DoD. Their AI-powered systems can detect and neutralize cyber threats in real-time, ensuring the integrity and security of critical defense infrastructure.
Learning from the US Department of Defense AI Adoption Strategy
The US Department of Defense (DoD) has been at the forefront of AI adoption, with a comprehensive strategy aimed at integrating AI across all defense operations. The DoD’s AI Adoption Strategy emphasizes the importance of a cohesive approach, leveraging AI for decision-making, operational efficiency, and enhanced capabilities.
The Defense Industrial Base can incorporate similar strategies and techniques using AI to ensure they continue to innovate.
Strategic Integration: AI should be integrated into core strategic goals, ensuring alignment with overall mission objectives.
Cross-Functional Collaboration: Successful AI adoption requires collaboration across different departments and expertise areas.
Continuous Learning and Adaptation: The dynamic nature of AI technologies mandates continuous learning, adaptation, and improvement.
Ethical Considerations: Ethical implications of AI use must be considered, ensuring compliance with legal and moral standards.
Developing Your AI Adoption Strategy
To harness the full potential of AI, the Defense Industrial Base must take proactive steps towards developing their own AI adoption strategy. This begins with assembling a team of cross-functional leaders who can bring diverse perspectives and expertise to the table. Together, this team can map out a tailored AI strategy that aligns with organizational goals and operational needs.
The integration of AI into defense operations is not just an option but a strategic imperative. By leveraging AI for predictive maintenance, supply chain optimization, enhanced cybersecurity, and autonomous systems, defense contractors can achieve remarkable efficiency gains and maintain a competitive edge. Learning from the US Department of Defense AI Adoption Strategy, the DIB can pave the way for a future where AI-driven excellence is the standard.
Assemble your team, define your strategy, and embrace the AI revolution for a secure and efficient future.
Resources:
Building a foundation for AI success: Business strategy
Unpacking AI governance to shape digital transformation
Prepare your data for secure AI adoption
GAO Report – Artificial Intelligence: DOD Needs Department-Wide Guidance to Inform Acquisitions
Fiscal Year 2024 Budget Request Overview
Ministral 3B small model from Mistral is now available in the Azure AI Model Catalog
At Microsoft, we are committed to driving innovation in AI by continually enhancing our offerings. On the first anniversary of Mistral 7B, we are excited to continue our collaboration with Mistral and announce the addition of a new state-of-the-art model to the Azure AI Model Catalog: Ministral 3B. Despite its size, this model sets a new standard in performance and efficiency.
A New Frontier in AI Performance
According to Mistral, Ministral 3B represents a significant advancement in the sub-10B category, focusing on knowledge, commonsense reasoning, function-calling, and efficiency. With support for up to a 128k context length, the model is tailored for a diverse array of applications—from orchestrating agentic workflows to developing specialized task workers.
Enhancing Workflow Efficiency
When used alongside larger language models like Mistral Large, Ministral 3B can serve as an efficient intermediary for function-calling in multi-step agentic workflows. Its ability to be fine-tuned allows it to excel in tasks such as:
Input Parsing: Quickly understanding user inputs to streamline processing.
Task Routing: Directing tasks to appropriate models or functions based on user intent.
API Calling: Efficiently interfacing with APIs while minimizing latency and operational costs.
Creating Powerful Agents with Small Models
Two Key Use Cases
Ministral 3B excels in two primary macro use cases:
Multi-Step Agentic Workflows: This use case involves orchestrating complex workflows where agents need to call larger models selectively. Ministral 3B serves as a highly efficient intermediary, identifying the appropriate larger models to invoke, ensuring that the right model is used for the right task in the workflow.
Low-Latency, High-Volume Use Cases: For applications requiring rapid, high-throughput responses—such as real-time customer support, data processing, and high-volume API calls—Ministral 3B delivers exceptional low-latency performance, allowing enterprises to process large volumes of requests with minimal delay.
Versatile Use Cases
Ministral 3B can also be utilized for a wide range of agentic use cases, including:
Customer Support Automation: Enhancing customer interactions with efficient automated responses.
Back Office and Process Automation: Streamlining operations and improving productivity.
Code Migration and CI/CD: Facilitating smoother transitions in software development cycles.
Improved RAG Architectures & Retrieval: Optimizing retrieval-augmented generation tasks.
Moderation and LLM Output Checking: Ensuring the quality and appropriateness of AI outputs.
Agentic Benefits of Smaller Models
Ability: Ministral 3B and other smaller models in the same category are highly performant, fast, and cost-efficient. They excel in function calling, making them ideal for agentic workflows.
Security: These models can be deployed securely and efficiently in your environment, ensuring that your internal data remains private. They can be implemented on a VPC, in the cloud, on-premise, or accessed via our API.
Customizability: Smaller models can be fine-tuned easily for specific tasks and may outperform larger models in certain domains. Their size allows for efficient fine-tuning and retraining, facilitating adaptability to evolving needs.
Agentic Architecture with Small Models
Ministral 3B is designed to work within an efficient agentic architecture that leverages specialized smaller models. Here’s how it works (a minimal routing sketch follows the list below):
User Request Handling: When a user makes a request (e.g., “Please give me my customer number”), it is processed through a router, which can be an embedding model or a larger language model (LLM).
Routing to Specialized Agents: The router directs the request to specialized agents, such as:
Account Management Agent
Fraud Detection Agent
Billing Details Agent
Customer Support Agent
Efficiency Benefits: This architecture is much faster and cheaper than using a single large model. Small and edge models are fine-tuned for specific domain tasks, enabling them to outperform larger general models in many scenarios.
Microservices Approach: The architecture allows for easier adaptation of models compared to a large LLM handling everything. This microservices approach leads to improved performance in function calling and overall user experience.
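Below is a minimal sketch of that routing pattern. The agents here are plain Python functions and the intent detection is a toy keyword check; in a real deployment each step would call a deployed model (for example, a fine-tuned Ministral 3B endpoint), so treat every name as illustrative.

def account_agent(request):
    return f"[account agent] handling: {request}"

def fraud_agent(request):
    return f"[fraud agent] handling: {request}"

def billing_agent(request):
    return f"[billing agent] handling: {request}"

def support_agent(request):
    return f"[support agent] handling: {request}"

AGENTS = {"account": account_agent, "fraud": fraud_agent, "billing": billing_agent, "support": support_agent}

def route(request):
    # Toy intent detection; a small router model would normally do this step.
    text = request.lower()
    if "customer number" in text or "account" in text:
        intent = "account"
    elif "fraud" in text or "suspicious" in text:
        intent = "fraud"
    elif "bill" in text or "invoice" in text:
        intent = "billing"
    else:
        intent = "support"
    return AGENTS[intent](request)

print(route("Please give me my customer number"))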
How to use Ministral 3B on Azure?
Here’s how you can effectively utilize the newly introduced Ministral 3B model in the Azure AI Model Catalog:
Prerequisites:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Familiarize yourself with Azure AI Model Catalog
Create an Azure AI Studio hub and project. Make sure you pick East US, West US3, South Central US, West US, North Central US, East US 2 or Sweden Central as the Azure region for the hub.
Create a deployment to obtain the inference API and key:
Open the model card in the model catalog on Azure AI Studio.
Click on Deploy and select the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation. You can use the API and key with various clients. Check out the samples to get started.
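As with the other serverless deployments, the endpoint and key can be used from any HTTP client. The sketch below assumes an OpenAI-style chat completions route; the exact path and payload may differ, so verify them against the deployment page and samples before use.

import requests

ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com"  # placeholder from the deployment page
API_KEY = "<your-api-key>"                                            # placeholder from the deployment page

payload = {
    "messages": [
        {"role": "system", "content": "You are a concise task-routing assistant."},
        {"role": "user", "content": "Please give me my customer number."},
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

response = requests.post(
    f"{ENDPOINT}/v1/chat/completions",  # assumed route; check the samples for the exact path
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])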
Conclusion
The introduction of Ministral 3B marks an exciting milestone in our journey to enhance AI capabilities on Azure. By integrating these state-of-the-art models into the Azure AI Model Catalog, we empower developers and businesses to innovate with confidence, leveraging advanced AI solutions for edge computing and on-device applications.
With its combination of low-latency performance, versatility across use cases, and cost-efficiency at $0.04 per million tokens, Ministral 3B is a game-changer for enterprises looking to harness the power of AI without breaking the bank.
Join us in exploring the future of AI with Ministral 3B—where cutting-edge technology meets practical applications for a smarter, more efficient world.
Simplifying Genomic Task Execution with GA4GH TES: A Guide for Bioinformatics Workflows
Introduction
The rise of high-throughput sequencing and bioinformatics workflows has led to increasing complexity in managing computational tasks across cloud environments. For many genomics professionals, translating biological questions into scalable, cloud-native processes can be a challenging experience. That’s where the Global Alliance for Genomics and Health (GA4GH) Task Execution Service (TES) comes into play.
At Microsoft, we’ve been working to simplify how bioinformaticians, developers, and data scientists submit and manage tasks on the cloud through the TES API. In this blog post, we’ll walk you through some practical examples of how you can submit tasks to TES using different tools and languages, such as curl, Nextflow, Python, and C#.
These examples will help you understand the flexibility TES offers to developers and scientists, making it easier to build, submit, and monitor your genomics workflows in the cloud.
Submitting Tasks to TES: A Brief Overview
TES follows a standardized API that abstracts the complexity of managing distributed tasks in the cloud. Whether you’re working with small datasets or scaling up to process terabytes of sequencing data, TES provides a unified interface for running tasks on cloud or on-premises environments.
Here, we will focus on four ways to submit tasks to TES:
Using Curl – A command-line tool for transferring data.
Python Client – For flexible, programmatic access to TES.
C# SDK – For developers working in the .NET ecosystem.
Nextflow Integration – A popular workflow management system for bioinformatics pipelines.
Example 1: Submitting Tasks with Curl
The simplest way to interact with TES is via the command line using curl. Below is the curl example provided in the official repository:
Prerequisites
Make sure you install jq if not already present. jq is a lightweight and flexible command-line JSON processor.
Create the TES Instance File
You’ll need to define the TES instances in a file named .tes_instances. This file should be in CSV format with two fields/columns: a description of the TES instance and the URL pointing to it.
You can create the file using the following command:
Important: Make sure to replace the example content with your actual TES instance description and URL. Avoid using commas in the description field.
Create the Secrets File
Next, create a secrets file (.env) that will store your environment variables such as TES service credentials and Azure storage account information. You can either set these variables in your shell or directly insert the values into the command below:
TES_SERVER_USER: Your TES service username.
TES_SERVER_PASSWORD: Your TES service password.
TES_OUTPUT_STORAGE_ACCT: Your Azure storage account where the outputs will be saved.
Running the Demo
After setting up the necessary configuration, you’re ready to submit a task using the BWA example. First download the run-bwa.sh script.
Run the following command to submit the task:
./run-bwa.sh
Here’s a summary of what it does:
Load Environment Variables:
It checks for and loads variables from a .env file, such as credentials and storage account information.
TES Task Submission:
The script defines a function submit_task() that submits a task payload (described later) to a TES instance via a POST request using curl. It uses basic authentication based on the environment variables loaded earlier.
TES Task State Monitoring:
Another function, get_task_state(), fetches the current state of a task by making a GET request to the TES instance using the task ID.
TES Instance URL:
The script reads the TES instance URL from a file called .tes_instances, which contains TES instance information. If no instance is found, the script aborts.
Task Payload:
The script constructs a JSON payload to describe the task. This payload includes:
Inputs: FASTQ files and a reference genome (HG38) to be used for BWA.
Outputs: The aligned output BAM file.
Executors: It uses the quay.io/biocontainers/bwa container to execute the BWA commands.
Resources: The task requests 16 CPU cores and 32 GB of RAM.
Submit the Task:
The task payload is submitted to the TES instance, and the script logs the full response for debugging. If the task is submitted successfully, it extracts the task ID.
Monitor Task Status:
The script monitors the task’s state every 5 seconds until it reaches one of the final states: COMPLETE, EXECUTOR_ERROR, SYSTEM_ERROR, or CANCELLED.
After the pipeline completes, all results will be saved in the Azure Blob Storage container outputs/curl.
This example demonstrates how to create a simple TES task in JSON format and submit it to a TES server running locally. The task definition includes input files, outputs, resource requirements, and a Docker container image to execute the task.
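For orientation, a TES task payload with the shape described above looks roughly like the following. The storage URLs, paths, and command are placeholders; the payload constructed inside run-bwa.sh is the source of truth.

{
  "name": "bwa-mem-demo",
  "inputs": [
    {"url": "https://<storage-account>.blob.core.windows.net/inputs/sample_R1.fastq.gz", "path": "/data/sample_R1.fastq.gz"},
    {"url": "https://<storage-account>.blob.core.windows.net/inputs/hg38.fa", "path": "/data/hg38.fa"}
  ],
  "outputs": [
    {"url": "https://<storage-account>.blob.core.windows.net/outputs/curl/sample.bam", "path": "/data/sample.bam"}
  ],
  "executors": [
    {"image": "quay.io/biocontainers/bwa", "command": ["/bin/sh", "-c", "<bwa mem command that writes /data/sample.bam>"]}
  ],
  "resources": {"cpu_cores": 16, "ram_gb": 32}
}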
Example 2: Submitting Tasks with Python (py-tes)
Python is a widely used language in bioinformatics, and the py-tes package simplifies programmatic submission of tasks to TES. The py-tes example is described below.
Prerequisites
To get started with py-tes, you need to install the required dependencies and set up the necessary configuration files. You can use Conda or the faster alternative, Mamba (recommended), to install these dependencies.
Install Dependencies
If you are using Conda or Mamba, you can create the environment with the following command:
This command will install all the dependencies listed in the environment.yml file.
Create the TES Instance File
You’ll need to define the TES instances in a file named .tes_instances. This file should be in CSV format with two fields/columns: a description of the TES instance and the URL pointing to it.
You can create the file using the following command:
Important: Make sure to replace the example content with your actual TES instance description and URL. Avoid using commas in the description field.
Create the Secrets File
Next, create a secrets file (.env) that will store your environment variables such as TES service credentials and Azure storage account information. You can either set these variables in your shell or directly insert the values into the command below:
TES_SERVER_USER: Your TES service username.
TES_SERVER_PASSWORD: Your TES service password.
TES_OUTPUT_STORAGE_ACCT: Your Azure storage account where the outputs will be saved.
Running the Demo
After setting up the necessary configuration, you’re ready to submit a task using the BWA example. First download the run-bwa.py script.
Run the following command to submit the task:
./run-bwa.py
This script will read the .tes_instances file to identify the TES instances and submit the task using the credentials and storage account information provided in the .env file.
Compared to the curl example, this script does the following:
Task Submission:
The script loops through the available TES instances and submits the task to each instance using the py-tes client.
If the task is submitted successfully, the task ID is logged. Otherwise, any error is caught and logged.
Helper Functions:
csv_to_dict: Reads the .tes_instances file and converts the contents into a dictionary for easy lookup of TES instance URLs.
submit_task: Submits a task to the specified TES instance using the py-tes client, with basic authentication (if required).
get_task_state: Fetches the current state of a submitted task using its task ID.
After the pipeline completes, all results will be saved in the Azure Blob Storage container outputs/py-tes.
This Python example illustrates how to interact with the TES API programmatically, making it easy to define and submit tasks from your scripts or applications.
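For readers who want to adapt the pattern, here is a pared-down sketch using the py-tes client. The endpoint, credentials, container image, and command are placeholders, and details such as the exact spelling of terminal states should be checked against the TES spec and the demo script.

import time
import tes  # pip install py-tes

client = tes.HTTPClient(
    "https://tes.example.org",      # your TES endpoint
    user="<TES_SERVER_USER>",       # basic-auth credentials, if your instance requires them
    password="<TES_SERVER_PASSWORD>",
)

task = tes.Task(
    name="hello-tes",
    executors=[tes.Executor(image="ubuntu:22.04", command=["echo", "hello from TES"])],
    resources=tes.Resources(cpu_cores=1, ram_gb=1),
)

task_id = client.create_task(task)
print("submitted task:", task_id)

# Poll until the task reaches a terminal state.
while True:
    state = client.get_task(task_id, view="MINIMAL").state
    print("state:", state)
    if state in ("COMPLETE", "EXECUTOR_ERROR", "SYSTEM_ERROR", "CANCELED", "CANCELLED"):
        break
    time.sleep(5)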
Example 3: Submitting Tasks with C#
For developers working in the .NET ecosystem, the C# SDK provides a convenient way to interact with TES. The C# example is described below.
Prerequisites
Before running the TES SDK examples, ensure you meet the following requirements:
.NET 8.0 SDK or higher: You need the .NET SDK to build and execute the C# examples. You can download it from the official .NET site.
Azure CLI: This tool is essential for authenticating and accessing Azure resources such as Blob Storage. You can log in to your Azure account by running the following command, which uses device authentication:
User Secrets: Use User Secrets in .NET to securely store the credentials for the TES service and Azure Blob Storage. You’ll need to configure the following secrets:
TesCredentialsPath: Path to TesCredentials.json (created during TES deployment).
StorageAccountName: Name of your Azure Blob Storage account (used to store output files).
Example commands to configure User Secrets:
Opening and Building the Project
Open the TES Solution: Launch Visual Studio and open the Microsoft.GA4GH.TES.sln file from the TES project. This file is located in the root directory of the TES repository.
Using the TES.SDK.Examples Project: The TES.SDK.Examples project in the solution contains various sample console applications to demonstrate how to interact with the TES API.
Building a Single-File Executable
For ease of deployment, you can package the demo application as a single-file executable. Below are the instructions for both Linux and Windows.
For Linux
Run the following command to publish the demo application as a single-file executable for Linux:
dotnet publish --configuration Release --output ./publish --self-contained --runtime linux-x64 /p:PublishSingleFile=true
For Windows
To publish the demo application for Windows:
dotnet publish --configuration Release --output ./publish --self-contained --runtime win-x64 /p:PublishSingleFile=true
Note: Replace linux-x64 or win-x64 with your desired platform runtime (e.g., osx-x64 for macOS).
This will create a single-file executable in the ./publish directory.
Running the TES SDK Examples
After building the project, you can run the TES SDK examples by executing the single-file executable you just created. Here are two key examples you can run.
1. Prime Sieve Example
This example submits a TES task that calculates prime numbers in a specified range. Run the following command:
./Tes.SDK.Examples primesieve [taskCount]
taskCount: (Optional) Number of tasks to run. Each task processes a range of 1,000,000 numbers. If omitted, it defaults to 1.
Example:
./Tes.SDK.Examples primesieve 10
This command will submit 10 tasks, each calculating prime numbers in a distinct range.
2. BWA Mem Example
This example submits a TES task to run the BWA Mem algorithm, which is widely used for aligning sequence reads to a reference genome.
./Tes.SDK.Examples bwa
The output from this task (such as the aligned BAM file) will be stored in your configured Azure Blob Storage account.
This example shows how to use the C# SDK to submit a task to a TES instance, making it easy for .NET developers to integrate TES into their cloud-native applications.
Example 4: Nextflow and TES Integration
For bioinformaticians managing complex workflows, Nextflow provides a powerful and flexible way to orchestrate tasks. Nextflow can directly submit tasks to TES, making it easier to scale your workflows across cloud environments.
In this Nextflow example, we configure Nextflow to use TES as the execution backend:
Prerequisites
Before you start, ensure you have the following installed and configured:
Nextflow: Nextflow requires Java (version 8 or 11) and can be installed on Linux, macOS, or Windows.
For Linux or macOS, run the Nextflow installation commands in your terminal.
For Windows, you’ll need to install Windows Subsystem for Linux (WSL) first. After setting up WSL, follow the Linux installation instructions.
For detailed steps, refer to the official Nextflow installation guide.
Java (version 8 or 11): Nextflow relies on Java, so ensure you have a compatible version installed on your machine.
Azure CLI: You’ll need this tool to authenticate and manage Azure resources such as Blob Storage.
Log in to your Azure account using:
Configuring Nextflow for TES
To connect Nextflow with your TES instance and Azure storage, you need to create a configuration file (tes.config) containing your TES and Azure credentials. Here’s an example of how the file should look:
process {
    executor = 'tes'
}
azure {
    storage {
        accountName = "<Your storage account name>"
        accountKey = "<Your storage account key>"
    }
}
tes.endpoint = "<Your TES endpoint>"
tes.basicUsername = "<Your TES username>"
tes.basicPassword = "<Your TES password>"
accountName: The name of your Azure Blob Storage account.
accountKey: Your Azure Blob Storage account key.
tes.endpoint: The URL of your TES endpoint.
tes.basicUsername and tes.basicPassword: Your TES service credentials.
Running the Nextflow Pipeline
To help you get started quickly, we’re using the nf-hello-gatk project, a sample Nextflow pipeline designed to showcase Nextflow’s capabilities with TES and Azure integration.
Use the following command to run the pipeline:
./nextflow run seqeralabs/nf-hello-gatk -c tes.config -w 'az://work' --outdir 'az://outputs/nextflow' -r main
Here’s what each part of the command does:
-c tes.config: Specifies the configuration file with your TES and Azure credentials.
-w 'az://work': Defines the Azure Blob Storage container for intermediate workflow files.
--outdir 'az://outputs/nextflow': Specifies the output directory in Azure Blob Storage where the results will be saved.
-r main: Specifies the branch of the repository to run (in this case, the main branch).
Viewing the Results
Once the pipeline completes, all results will be saved in the Azure Blob Storage container specified by the --outdir flag.
By using Nextflow with TES, bioinformaticians can easily parallelize tasks across different compute environments while maintaining full control of task execution through Nextflow’s powerful DSL.
Conclusion
The flexibility of the GA4GH TES API allows users to submit and manage tasks in a wide variety of environments—whether you’re working from the command line with `curl`, managing workflows with Nextflow, writing scripts in Python, or building applications in C#.
Microsoft is committed to providing tools that help simplify the genomic workflow experience across cloud platforms. With TES, you can easily scale your bioinformatics tasks in a consistent and cloud-agnostic manner. Try out these examples in your own workflows and see how TES can help streamline your bioinformatics operations.
If you’d like to dive deeper, check out the full documentation and examples on the GA4GH-TES GitHub repository, and stay tuned for new ways to interact with the API.
Enhancing Security with CISA’s ScubaGear Baselines for M365
In today’s digital age, securing an organization’s information is more critical than ever. The Cybersecurity and Infrastructure Security Agency (CISA) stood up a program called Secure Cloud Business Applications (SCuBA). The program was created in response to Solorigate in 2020 and the discovery of common cybersecurity gaps that increased organizations’ risk. One of the project’s primary purposes is to provide guidance toward improving the security posture of cloud environments.
The SCuBA program offers a valuable assessment tool called ScubaGear that produces reports to help harden Microsoft 365 environments. Microsoft has worked together with CISA to produce and maintain the secure configuration baselines for ScubaGear as well as an accompanying PowerShell script tool to scan M365 environments. The tool was aimed at better securing the commonly misconfigured settings that enabled adversaries to move laterally into cloud environments, gain access to data, or stay undiscovered. CISA sought a mechanism to check for secure configurations in the M365 cloud environment of any organization. Thus, the ScubaGear tool was born.
The CISA and Microsoft partnership within the SCuBA program provides a unified approach to cloud application security and facilitates the sharing of best practices and threat intelligence as organizations work to better secure their environments. This post focuses on the benefits of hardening M365 and outlines some important steps to follow when using ScubaGear to scan and provide reports to assist in finding security settings that better the security posture of tenants.
What is ScubaGear?
ScubaGear is designed to identify weak security configurations of cloud-based business applications used by federal agencies but can be utilized by any organization. ScubaGear provides comprehensive guidelines and standards to assist cloud environments in meeting security requirements. This includes best practices for configuration management, and monitoring of those environments. Baseline implementation guides can be found at Secure Cloud Business Applications (SCuBA) Project | CISA.
The PowerShell source code and download for the tool can be found at GitHub – cisagov/ScubaGear: Automation to assess the state of your M365 tenant against CISA’s baselines. For easier installation, you can utilize PowerShell Gallery (https://www.powershellgallery.com/packages/ScubaGear/1.3.0) to start your scanning journey (Install-Module -Name ScubaGear). Installing and running the tool provides the capability of conducting security assessments of cloud environments via PowerShell and Open Policy Agent to check compliance with the implementation guides. The combination of PowerShell and the Open Policy Agent allows anyone to check compliance with the latest ScubaGear standards through a means of automatically comparing the output of the tool with CISA’s baselines.
The tool is intended to help organizations comply with various security regulations and policies. It aligns with federal mandates and frameworks and helps systems meet security standards. A report is generated that shows where organizations have appropriately hardened their security controls. The tool may also map to other security frameworks, but that mapping has not yet been done. Not every suggestion from the tool will fit the risk posture or appetite of every organization, but it does provide valuable insight and information regarding an infrastructure’s current security posture.
Benefits of Hardening Microsoft 365
Hardening your Microsoft 365 environments helps organizations to safeguard their data against potential threats. By implementing robust security measures, you can:
Enhance data protection and privacy.
Reduce the risk of unauthorized access.
Improve compliance with industry standards and regulations.
Improve upon logging.
Key Services Checked by ScubaGear
ScubaGear checks for several critical settings across various Microsoft services to provide recommended changes targeted at building more comprehensive security controls. Key settings are included for the following services:
Entra ID: Verifies that secure identity management and access controls, like Conditional Access, are in place.
Defender: Checks advanced threat protection, data loss prevention (DLP), and real-time monitoring settings.
Exchange Online: Looks for phishing settings and other email security options (e.g., DKIM).
Power Platform: Recommends changes to data and application settings within the Power Platform ecosystem.
SharePoint/OneDrive: Addresses security settings for sharing and other site permissions.
Teams: Recommends controls for more secure communication and collaboration within Microsoft Teams.
The ScubaGear team is looking to expand into further M365 services in the future.
By following the steps outlined in this document and using ScubaGear, you can significantly enhance the security of Microsoft 365 environments. ScubaGear’s guidelines and best practices can help you stay ahead of potential threats and foster a secure digital environment for your organization.
Important Steps for Using ScubaGear
To effectively use ScubaGear, it is essential to follow a regimen of regular scans and checks. Here are some key steps to consider:
Regular Scanning: Schedule regular scans of your Microsoft 365 environments to identify and address potential vulnerabilities. Settings can fluctuate over time, and ScubaGear allows for scanning of environment settings to see if there is any deviation from a secure baseline.
Review and Update Security Policies: Validate that your security policies are up-to-date and aligned with the latest best practices.
Implement Recommended Settings: Apply the recommended settings provided by ScubaGear to enhance your security posture.
Stay Connected with the Microsoft Public Sector Tech Community
Continue the conversation on advancing technology in government and public services. Join the Microsoft Public Sector Tech Community to connect with peers, share insights, and engage in discussions on IT solutions for government in the discussion space. For updates on cloud security, compliance, and digital transformation, follow the Public Sector Blog.
How do I void an invoice in QB Online?
I’m new to QB Online and made a mistake on an invoice that I need to void. I’m unsure how to properly void it without messing up my records. Can anyone guide me through the steps to void an invoice in QB Online? Any help would be appreciated!
Drives search endpoint suddenly stopped working
We are facing an issue where the following endpoint has suddenly stopped working (as of 2024-10-23) for only one of our accounts:
https://graph.microsoft.com/v1.0/me/drive/root/search(q=’.xls’)
I am able to see the Excel workbooks I’m searching for via the /drive/root/children endpoint, but /search is returning no results in one of the two accounts. I know indexing can be an issue that causes a delay for results to show up from /search, but many of the files on the affected account have been there for 6 months.
Comparing the working account to the failing one, I don’t see any significant difference in the requests. For both accounts, the OAuth scopes requested are the same (from the decoded Bearer token):
“scp”: “AllSites.Read Files.Read Files.Read.All Files.Read.Selected Files.ReadWrite Files.ReadWrite.All Files.ReadWrite.AppFolder Files.ReadWrite.Selected Sites.Read.All Sites.ReadWrite.All profile openid email”
Both accounts are Business OneDrive accounts. I can even access the workbooks directly on the failing account from the following endpoint:
https://graph.microsoft.com/v1.0/me/drive/items/017ZHZ4ENXPMNDLB52LFF3ZX55FHHUGZ3F/workbook/worksheets
So it seems that /search is simply not working. Our tool is reliant on the /search endpoint, so it would not be a quick fix to change this and it is affecting some of our customers as well.
Is this a known issue or does anyone have another clue on what to check? I’m wondering if search indexing is broken on this account (and our customer’s account), but I don’t know of any way to force it to re-index the files.
How to Fix Error Initializing QBPOS Application Log on Windows 10?
I’m running QB POS on Windows 10, but I keep getting an “Error Initializing QBPOS Application Log.” I’ve tried restarting the system, but the issue persists. Has anyone experienced this before or know a solution to fix it? Any help or suggestions would be greatly appreciated!
Microsoft Connected Cache for Enterprise and Education preview
Delivery Optimization and Microsoft Connected Cache are comprehensive solutions from Microsoft for minimizing internet bandwidth consumption. Delivery Optimization acts as the distributed content source and Connected Cache acts as the dedicated content source. Organizations have benefited from these solutions, realizing significant bandwidth savings of up to 98 percent with Windows 11 upgrades, Windows Autopilot device provisioning, Microsoft Intune application installations, and monthly update deployments.
Until now, Connected Cache could only be deployed on Configuration Manager distribution points. With the release of Connected Cache for Enterprise and Education to public preview on October 30, organizations will have more flexibility in deploying Connected Cache directly to host machines running Windows Server, Windows client, and Linux [Ubuntu and Red Hat Enterprise Linux (RHEL)].
Supporting scenarios that are important to enterprises
While Delivery Optimization is mostly known for being a peer-to-peer delivery solution, it’s also the downloader that pulls update content in Windows from the cloud and provides enterprise and education users with tools to manage bandwidth traffic, throttling capabilities, and more.
Connected Cache technology complements Delivery Optimization as a dedicated software caching solution that can be deployed within enterprise and education organizations’ networks. Once deployed to host machines within a network, Connected Cache nodes will transparently and dynamically cache the Microsoft-published content that downstream Windows devices need to download. Using this solution, content requests from Delivery Optimization can be served by the locally deployed Connected Cache node instead of the cloud. This results in fast, bandwidth-efficient delivery across connected devices. Microsoft worked closely with numerous enterprise and education organizations to gather information about their bandwidth management needs. We used the great feedback we received to develop Connected Cache as a solution that supports the scenarios most important to you.
Moving from on premises to hybrid or fully cloud-managed scenario
Enterprises and educational institutions have used solutions like Configuration Manager for device management and content distribution. Many of these organizations:
Manage all or part of their device tenant with Intune or other mobile device management (MDM).
Are tasked with decommissioning their Configuration Manager distribution points.
Are still faced with the challenge of managing content delivery bandwidth.
To support the on-prem to hybrid or fully cloud managed scenario, Connected Cache can be deployed directly to hardware or a virtual machine (VM) running either Windows Server 2022 using Windows Subsystem for Linux (WSL) 2, which is an enterprise-ready, lightweight, first-party solution, or certain Linux distros (Ubuntu 22.04 and RHEL 8 and 9).
Branch office
Many enterprises and educational institutions have a global presence with remote locations where:
Hundreds of Windows workstations are present.
No dedicated server hardware or administrator is present on-site.
Internet bandwidth may be limited and/or internet connectivity may be intermittent.
Reserving bandwidth for office operations may be more important than download performance of Microsoft content.
To support the branch office scenario, Connected Cache can be deployed directly to Windows 11 workstations using WSL 2.
Enterprise or educational sites
The traditional enterprise or educational site occupies one or more buildings, and may have multiple locations where:
Hundreds to thousands of Windows workstations, Windows servers, or virtual machines are present.
Reuse of existing hardware is important (decommissioned Configuration Manager distribution point, file server, cloud print server) or dedicated server hardware is available on-site.
Internet bandwidth may range from great to limited (T1), and/or internet connectivity may be intermittent.
Reserving bandwidth for office or educational operations, especially during peak times, is a top priority.
Performant downloads are necessary to support mass update, upgrade, or Autopilot provisioning operations.
To support the enterprise or educational site scenarios, Connected Cache can be deployed directly to hardware or VMs running Windows Server 2022. Deployments can be made using WSL 2 or certain Linux distros (Ubuntu 22.04 and RHEL 8 and 9).
Bulk management and deployment
Connected Cache Azure resources are typically managed using the Microsoft Azure portal web interface, but they can also be managed using Command-Line Interface (CLI). Connected Cache nodes can be remotely deployed via PowerShell or Linux shell scripts that require no direct user input, enabling deployment of cache nodes without on-site presence.
Telemetry by content type
Organizations want to have insights into the health of cache nodes and what content is being delivered to their devices. The Connected Cache Azure portal displays a near real-time and historical view of the outbound traffic in Mbps and volume by content type. These insights help ensure the cache is deployed correctly and devices are successfully pulling content from it. Further details such as cache efficiency (expressed as the percentage of content coming from Connected Cache), per site data, and per country data, will be available in Windows Update for Business reports.
Deploy Microsoft Connected Cache for Enterprise and Education
Starting October 30, 2024, Windows Enterprise (E3, E5, and F3) and Windows Education (A3 and A5) users will be able to use the Azure Marketplace to create “Microsoft Connected Cache for Enterprise and Education” Azure resources that will be used to manage Connected Cache deployments. Once the Connected Cache Azure resource has been created, users can create as many cache nodes as required to support their network topologies or content deployment. Please see the Microsoft Connected Cache for Enterprise and Education documentation overview page for more details.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X and on LinkedIn. Looking for support? Visit Windows on Microsoft Q&A.
Purview Data Quality – Run Profile Failed – it shows DQ-Asset not found
Hi Everyone,
In Purview, I have successfully published a data product with a Power BI data asset added to it. I added the DQ connection successfully as well, and assets are identified successfully. Still, I get this error – DQ-Asset not found. Please check if asset exist and try again.
Please help if anyone has encountered this issue and resolved this one.
Thanks,
Pallavi
Discover AI skill building with Microsoft Training Services Partners
To fully realize the benefits that AI can bring to your organization, your workforce needs to know how to use AI-powered tools and services and how to integrate them into existing workflows. As the World Economic Forum Future of Jobs 2023 report explains, “Training workers to utilize AI and big data ranks third among company skills training priorities in the next five years and will be prioritized by 42% of surveyed companies.” Whether you have a well-established training framework for your employees or you’re just starting to set up your skill-building processes, validating that your training programs are up to date with the latest technical skills can be a challenge.
Microsoft Training Services Partners (TSPs) can help your workforce quickly build these cutting-edge skills. TSPs teach IT skills that directly transfer from the classroom to the workplace, and they offer:
Deep technical expertise and experience in Microsoft’s AI apps and services.
Skills assessments and customized training plans, based on your organization’s needs and objectives.
In-depth training solutions taught by Microsoft Certified Trainers (MCTs) and centered around Microsoft Official Courseware and Microsoft Credentials, using content from Microsoft Learn.
Self-paced and instructor-led courses, virtually and in person, making the most of local market expertise, videos, interactive labs, and Microsoft Certification preparation resources.
Prioritizing AI training is essential
TSPs around the world have helped employees in countless organizations build their skills in AI and other Microsoft technologies. Check out the following stories from Skillsoft and Sulava to find out how TSPs are helping learners skill up on AI.
Skillsoft: A global presence for AI skills training
TSP Skillsoft has a global presence, serving over 60% of the Fortune 1000, with customers across 160 countries or regions. In conjunction with Microsoft, Skillsoft developed the AI Skill Accelerator program, a combined curriculum and training approach to prepare workers to take advantage of AI tools, including Microsoft Copilot and Azure OpenAI Service. As a Copilot “customer zero,” Skillsoft trainers know well how to help customers skill up since they’ve built their own skills by going through the same learning experiences.
This comprehensive program is aimed at accelerating your organization’s ability to integrate AI capabilities into the business and enabling your employees to develop practical AI skills that can be applied directly to their work. It includes a generative AI tool for the business customer, Skillsoft CAISY™ Conversation AI Simulator, Skills Benchmark assessments, and customizable content. Skillsoft can upskill your entire organization on Microsoft’s AI apps and solutions, including Copilot and Azure AI.
“We recently adopted Microsoft Copilot at Skillsoft and launched a multi-step Copilot Enablement Program to help all team members build familiarity with Copilot and understand how the technology can increase their daily productivity. As we advance our AI transformation, we are leveraging our AI Skill Accelerator program to upskill our team, which is validating the importance of a structured enablement program to take full advantage of Copilot capabilities and other emerging GenAI tools.”
—Orla Daly, Chief Information Officer (CIO), Skillsoft
Get more information about how Skillsoft can accelerate AI skill building across your organization.
Sulava: A pioneer in AI skill building
Finland-based Sulava, a leading TSP, is a pioneer in helping organizations adopt AI. In its collaboration with Indutrade, a technology and industrial company, Sulava provides ongoing training sessions for Indutrade employees. Over a six-year period, Sulava has conducted more than 180 of these sessions, covering topics such as Microsoft’s AI apps and services, Office 365 apps, Microsoft Power Platform, modern work methods and tools, and cybersecurity. The training sessions address current skilling needs, leading to improved efficiency, higher job satisfaction, and reduced support ticket volumes.
More recently, the collaboration has expanded to include Sulava’s Copilot Adoption Service, facilitating the implementation of Microsoft 365 Copilot and providing ongoing training to ensure that Indutrade and its subsidiaries can effectively use AI and work more efficiently.
“I think it’s very important that we chose Sulava for this service because it’s too much to expect people to figure out using AI on their own. The transition of work requires a lot of effort, and it needs support and training.”
—Jarmo Kaijanen, Chief Information Officer (CIO), Indutrade
Read all the details in Indutrade Develops Skills from AI to Cybersecurity.
These are just two of the inspiring stories that illustrate how TSPs can assist your organization with AI skill building. If you and your teams are ready to skill up with the experts, find a TSP near you.
Learn Microsoft AI with our blog series: Get AI ready
Check out other articles from our AI-focused series of blog posts exploring perspectives and opportunities to acquire critical AI skills.
Get AI ready: Build skills with Microsoft Learn
Get AI ready: Inside the Copilot Learning Hub
Update | Viva Connections Feed web part and video news link retirement
As a follow-up to our initial announcement on the retirement of the Viva Connections Feed Web Part and Video News Link, we wanted to remind you of the upcoming changes and the actions you need to take.
Progress Update
On September 1, 2024, we successfully removed the Feed for Viva Connections web part from the toolbox and the Video news link from the + New menu in the organizational site. This was the first phase of our plan to streamline and enhance the user experience across our applications.
What’s Next?
We are now approaching the second phase of this retirement plan. Starting November 5, 2024, we will end support for both the Feed for Viva Connections web part and the Video news link. This means that any instances of these features that have not been replaced with the recommended alternative solutions will no longer display content and will result in an empty web part or an error message.
Action Required
If you have not yet taken action, we strongly encourage you to do so before November 5, 2024. Site editors should update the affected sites with the recommended alternative solutions, such as the News, Viva Engage, File and Media, and Highlighted content web parts, as well as video pages.
For more detailed information and support guidance, please refer to the Viva Connections Feed Web Part and Video News Link Retirement – Support Guidance.
Microsoft’s 2024 Global Diversity & Inclusion Report: Our most global, transparent report yet
Today, I am sharing Microsoft’s 2024 Diversity & Inclusion Report, our most global and transparent report to date. This marks our sixth consecutive annual report and the eleventh year sharing our global workforce data, highlighting our progress and areas of opportunity.
Our ongoing focus on diversity and inclusion is directly tied to our inherently inclusive mission — to empower every person and every organization on the planet to achieve more, enabling us to innovate in the era of AI. As we approach our company’s 50th anniversary, we remain deeply committed to D&I because it is what creates transformational solutions to the most complex challenges for customers, partners and the world.
Key data
We gather a range of data, which is presented in specific ways throughout the report. In the following section, it is important to understand the distinction between our *Broader Microsoft business and our **Core Microsoft business.
New and expanded data
Datacenters: As we lead the AI platform shift, our workforce continues to expand to include employees with varied backgrounds and roles, and we are sharing new data this year on a growing employee population in datacenter roles. The population of datacenter employees grew 23.9% globally and 28.9% in the US in 2024, more than tripling since 2020.
In our most global report to date, we expanded our global Self-ID data to include Indigenous and military employees, as well as employees with disabilities. For example, 5.7% of global employees in our core Microsoft business self-identified as having a disability, an increase of 0.2 percentage points year over year.
We continue to have pay equity. For median unadjusted pay analysis, the data shows we have made progress in narrowing the gaps. This year we expanded both the pay equity analysis and the median unadjusted pay analysis to include not only women inside and outside the US but also a combined view of women globally. Increasing representation for women and racial and ethnic minority groups at more senior levels, combined with maintaining pay equity for all, will continue to reduce the median unadjusted pay gap.
Representation
Representation of women in our core Microsoft workforce is 31.6%, an increase of 0.4 percentage points year over year. Additionally, the representation of women in technical roles is 27.2%, an increase of 0.5 percentage points year over year.
Representation of women in our core Microsoft workforce rose year over year at all leadership levels except Executive.
Leadership representation in our core Microsoft workforce of Black and African American employees at the Partner + Executive level grew to 4.3%, an increase of 0.5 percentage points year over year. Leadership representation in our core Microsoft workforce of Hispanic and Latinx employees at the Executive level rose to 4.6%, an increase of 0.8 percentage points year over year.
In our broader Microsoft workforce, representation of racial and ethnic minority groups is 53.9%, an increase of 0.6 percentage points year over year.
Culture and inclusion in focus
Employee sentiment and engagement
Our semi-annual Employee Signals survey focuses on employee experience and helps us deepen our understanding so we can adjust our efforts where needed. These insights show that employees continue to feel like they are thriving, with a global and US score of 76. Within Employee Signals, we focus on thriving, which we define as “being energized and empowered to do meaningful work.” This is designed to measure employees’ sense of purpose, which is important to personal and professional fulfillment. We survey employees on three dimensions of thriving: Feeling energized, feeling empowered and doing meaningful work.
Our Daily Signals survey results indicate employee perceptions around Microsoft’s commitment to creating a more diverse and inclusive workplace increased two points year over year to an average score of 79.
Since introducing the concept of allyship to employees in 2018, we have inspired and led a positive impact on our culture. As of June 2024, 95.6% of employees reported some level of awareness of the concept of allyship, up from 65.0% in 2019 when we first started asking employees about their awareness.
A commitment that spans decades
Our annual D&I report not only reviews our data, but also illuminates the intentional strategy and actions that have helped us make progress across our company’s journey.
Examples include:
Being one of the first Fortune 500 companies to expand antidiscrimination policy and benefits to LGBTQIA+ employees in 1989.
Announcing our Racial Equity Initiative in June 2020, outlining actions and progress we expect to make by 2025 to help address racial injustice and inequity in the US for Black and African American communities.
Launching immersive D&I learning simulations in 2021, allowing employees to practice crucial D&I skills, such as recognizing and addressing bias, responding to microaggressions and demonstrating effective allyship.
Building on more than a decade of helping to reskill military service members through our Microsoft Software and Systems Academy (MSSA), and this year expanding this skilling opportunity to train military spouses for portable, in-demand IT roles.
Introducing pronouns and self-expression features in Microsoft 365, an innovation brought directly to fruition because we listened to, and collaborated with, customers, partners and employees who asked for these features.
A mission as bold as ours
At Microsoft, we’re guided by our mission, worldview and culture. Our mission is the why; it drives our actions. Our worldview is the what, shaping our strategy and products. Culture is the how, influencing everything with a focus on growth and innovation. Culture is also the who: Who makes up the workforce, who services our customers, who innovates the future of tech. The diversity of the workforce, combined with inclusion, unlocks individual and collective potential. This is what is necessary to stay relevant, compete at scale and win.
Every person. Every organization. Every day. Everywhere.
Here’s to making progress for the next 50 years.
Lindsay-Rae
Notes
* Broader Microsoft business: Includes the core Microsoft business, plus minimally integrated companies. Employees of joint ventures and newly acquired companies are not included in the data, including Activision, Blizzard, and King. LinkedIn was acquired in December 2016, GitHub was acquired in June 2018, and Activision, Blizzard, and King were acquired in October 2023. We provide standalone data for these three acquisitions. Nuance Communications was acquired in March 2022 and fully integrated in August 2023.
**Core Microsoft business: Represents 88.4% of the worldwide broader Microsoft workforce.
AI Community Day – Boost AI Workflow Productivity
Hey everyone!
Thanks for joining our session today at AI Community Day. Here you can find the resources shared during the session, along with our contact links.
Resources
Gen APIM Samples Repository
The Azure Developer CLI
Azure OpenAI Assistants
Liam Hampton
LinkedIn
Chris Noring
LinkedIn
Keyboard shortcuts
Alt + Ctrl + Right Arrow for grouping items is not working in Excel. What could be the possible reason for it, and how can I fix it?
Error signing app package with certificate from Azure Key Vault
Hello,
we have a new (EV) code signing certificate stored in Azure Key Vault.
Signing any of our [installer].exe files with the AzureSignTool.exe via command line, for example, is no problem. However, when it comes to signing any of our Windows application packages (*.msix), the signing fails (via VS or via command line).
Signing via VS
When we publish our app in VS, we can select our certificate directly from Azure:
Signing then fails with the following error message:
What is noticeable is that Identity.Publisher in the Package.manifest file has been replaced with: “CN=GlobalSign GCC R45 EV CodeSigning CA 2020, O=GlobalSign nv-sa, C=BE”. This string matches the Issuer of our certificate, but not the subject. Is that how it’s intended?
Nevertheless, when we disable automatic signing with (AppxPackageSigningEnabled = false) and replace Identity.Publisher with the subject of our certificate (“CN=3D.aero GmbH, , ST=Hamburg, …”), validation fails with the error message: “Validation error. error C00CE169: App manifest validation error: The app manifest must be valid as per schema: [..]”. We have found out that the RegEx does not allow an ST key during validation. So we replaced it with S and the build completes.
Signing via command line
However, if we then try to sign the built MSIX package with the AzureSignTool.exe via command line, this fails with the following error message:
“The Publisher Identity in the AppxManifest.xml does not match the subject on the certificate.”
We assume this is due to the ST <> S problem?
When using “CN=GlobalSign GCC R45 EV CodeSigning CA 2020, O=GlobalSign nv-sa, C=BE” manually, AzureSignTool also fails with the same message.
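For reference, a quick way to inspect the mismatch locally, before involving the signing service at all, is to compare the Identity Publisher in AppxManifest.xml with the subject string on the certificate. A minimal Python sketch, assuming a DER-encoded .cer export of the certificate and the standard MSIX manifest namespace (the file names are placeholders); note that the cryptography package renders stateOrProvinceName as ST, which is exactly the token the manifest schema rejects:

# Minimal sketch: compare the MSIX manifest Publisher with the certificate subject.
# File names are placeholders; requires the 'cryptography' package.
import xml.etree.ElementTree as ET
from cryptography import x509

# Default (foundation) namespace used by AppxManifest.xml in MSIX packages.
NS = {"m": "http://schemas.microsoft.com/appx/manifest/foundation/windows10"}

manifest = ET.parse("AppxManifest.xml")
publisher = manifest.getroot().find("m:Identity", NS).attrib["Publisher"]

with open("codesigning.cer", "rb") as f:  # DER-encoded certificate export
    cert = x509.load_der_x509_certificate(f.read())

# rfc4514_string() abbreviates stateOrProvinceName as ST, the token rejected by the
# manifest schema. This naive string comparison only highlights such token differences;
# the signing tools compare the distinguished names semantically.
subject = cert.subject.rfc4514_string()

print("Manifest Publisher :", publisher)
print("Certificate subject:", subject)
print("Exact string match :", publisher == subject)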
Questions
Why does signing fail if the certificate was automatically selected via the wizard?
The Identity.Publisher string matches the Issuer of our certificate, but not the subject. Is that how it’s intended?
The RegEx for the validation of Identity.Publisher does not allow ST; can this be fixed?
Current issues we are facing with SharePoint Features rolling out
This is more of an awareness post, but I felt it important to share some of the key issues we are facing with the many new features Microsoft is rolling out that often are not ready for production, are badly designed, have legal and compliance implications, and, more often than not, offer no way to disable them (they are forced on us).
I’ve submitted a number of UserVoice posts outlining the most significant ones that I feel would relate to most users/companies using SharePoint.
New Co-Authoring Feature REMOVES Discard Changes
Legal and Compliance Issue: Customize your individual sites and experiences with fonts and themes [MC905757]
Management (Governance) for Copilot Agents in SharePoint (Enable/Disable)
SharePoint brand center Feedback and Feature Requests
Disable new SharePoint Pages feature from SharePoint Start Experience (M365 Roadmap ID: 124824)
Link on a Page to a PDF should return the user to the Page when they close the PDF, not be redirected to the library the PDF lives in
Open Office documents (Word, Excel, PowerPoint) in Read only mode by default
Latest Edge Dev crashing
On the latest Edge Dev build (.2903.5), if I try to drag and drop a link from the Favorites Bar, Edge just closes. There is no error showing in Reliability Monitor. It works fine if I right-click the link and open it in a new tab.