Category: Microsoft
Category Archives: Microsoft
Open to Work: How to Get Ahead in the Age of AI
Today is the day. Open to Work: How to Get Ahead in the Age of AI is officially available!
At a time when technology dominates the headlines, the conversation I see most often on LinkedIn is deeply human: what does AI mean for my job and my career?
And that makes sense. Careers once felt more predictable. Titles defined what you did. Progress looked like a ladder. That model has been evolving for years, but AI is accelerating the shift.
The most important truth about this moment is that the outcome isn’t written yet. The new world of work is being assembled right now, task by task, policy by policy, business by business. It will reflect the choices of the people who show up to build it.
That’s why Aneesh Raman and I wrote this book.
Open to Work is a practical guide informed by what we see across the global labor market and insight into the tools millions of people use every day. It’s for every person asking what comes next for their job, their career, their company or their community.
With help from experts and everyday LinkedIn members, it shows you how to engage with AI before you have to, how to adapt by focusing on what you can control and how to become irreplaceable by leaning into what makes you uniquely you.
And those ideas don’t just apply to individuals, they guide how we as Microsoft and LinkedIn are building for this moment. At the intersection of how work gets done and how careers get built, our shared goal is to connect people to opportunity and turn the tools they use every day into a canvas for human and AI collaboration at scale. Done right, that’s how AI expands opportunity and helps people build confidence and momentum in their careers.
We’ve always believed technology should serve people. AI should help humans. Not the other way around. That doesn’t happen by accident. It happens when we all decide to make it true.
If you want to go deeper on Open to Work, listen to my conversation with Microsoft President and Vice Chair Brad Smith on his Tools and Weapons podcast.
Open to Work is available now at linkedin.com/opentowork.
Ryan Roslansky is the CEO of LinkedIn and Executive Vice President of Microsoft Office, where he leads engineering for products like Word, Excel, PowerPoint and Copilot. Through these roles, Ryan is shaping where work goes next to unleash greater economic opportunity for the global workforce.
The post Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.
Today is the day. Open to Work: How to Get Ahead in the Age of AI is officially available! At a time when technology dominates the headlines, the conversation I see most often on LinkedIn is deeply human: what does AI mean for my job and my career? And that makes sense. Careers once felt…
The post Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.Read More
Announcing Copilot leadership update
Satya Nadella, Chairman and CEO, and Mustafa Suleyman, Executive Vice President and CEO of Microsoft AI, shared the below communications with Microsoft employees this morning.
SATYA NADELLA MESSAGE
I want to share two org changes we’re making to our Copilot org and superintelligence effort.
It’s clear a new era of productivity is emerging as AI experiences rapidly evolve from answering questions and suggesting code, to executing multi-step tasks with clear user control points. You see this in our announcements over the last couple of weeks, like Copilot Tasks and Copilot Cowork, agentic capabilities in Office, and Agent 365. As these experiences connect more naturally across agents, apps, and workflows, we have an opportunity to help customers spend more time on higher-value work and reduce manual coordination, while providing people with more agency and empowerment and organizations with the governance and security controls they need.
To that end, we are bringing the Copilot system across commercial and consumer together as one unified effort. This will span four connected pillars: Copilot experience, Copilot platform, Microsoft 365 apps, and AI models. This is how we move from a collection of great products to a truly integrated system, one that is simpler and more powerful for customers.
Jacob Andreou will lead the Copilot experience across consumer and commercial, driving design, product, growth, and engineering, as EVP, Copilot, reporting to me. As CVP of Product and Growth at Microsoft AI, Jacob has accelerated our user-focused AI-first product making and growth framework. Prior to that, he was SVP at Snap, where he helped scale the company from its early days.
Progress at the AI model layer is more critical than ever to our success as a company over the next decade and is foundational to everything we build above it. We are doubling down on our superintelligence mission with the talent and compute to build models that have real product impact, in terms of evals, COGS reduction, as well as advancing the frontier when it comes to meeting enterprise needs and achieving the next set of research breakthroughs. Mustafa Suleyman and I have been working towards this plan for some time, and he will continue to lead this high ambition work, reporting to me. Mustafa is uniquely qualified to drive this forward, with his deep focus and commitment to advancing the frontiers of model science, while also ensuring that human control, agency, and economic opportunity remain at the center of these advancements.
Ryan Roslansky, Perry Clarke, and Charles Lamanna will lead M365 apps and the Copilot platform. Together, Jacob, Ryan, Charles, Perry, and Mustafa make up the Copilot LT and over the next few weeks they’ll work to align the teams.
Our org boundaries will simply reflect system architecture and product shape such that we can deliver more coherent and competitive experiences that continue to evolve with model capabilities. And I am looking forward to how together we apply all of this to empower people, organizations, and the world.
MUSTAFA SULEYMAN MESSAGE
Subject: A new structure for Microsoft AI
Technology and the future of our industry will be defined by two things: frontier models, and the products through which they are experienced. For some time, I’ve been thinking about how we best tackle these huge challenges, and today I’m excited to be evolving our structure at Microsoft AI, ensuring we’re positioned to succeed in both.
I came to Microsoft with an overriding mission: to create Superintelligence that delivers a transformative, positive impact for millions of people. This requires us to build frontier models, at scale, pushing the boundaries of what’s possible. Everything else follows from this. It’s the foundation for our future as a company. With our ambitious, long-term frontier scale compute roadmap locked, we now have everything we need to build truly SOTA models.
As you will have just heard from Satya, the next phase of this plan is to restructure our organization to enable me to focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years. These models will enable us to build enterprise tuned lineages that help improve all our products across the company. They’ll also enable us to deliver the COGS efficiencies necessary to be able to serve AI workloads at the immense scale required in the coming years. Achieving all this will be a huge challenge, and I’m committing everything we have – and I have personally – to make it happen.
To that end, I’ve been working hard with other leaders in the background for a while now to define a strategy to unify Copilot by bringing together the Consumer and Commercial efforts as one. We all know this makes sense. Every user – whether at home or at work – will be able to enjoy the full benefit of what we are all building. Today, we’re combining these organizations into a single, unified Copilot org. Jacob has demonstrated himself to be an outstanding leader for the product experience and clearly has the product instincts, the operational range, and the conviction to make Copilot a great success.
Jacob will retain a dotted line to me, and I’ll stay directly involved in much of the day-to-day operation of MAI, attending Meetups, MMMs, LT, and supporting Jacob to drive all areas of product strategy. To ensure that the models we build and the products we ship are mutually reinforcing, we are establishing a Copilot Leadership Team that includes me, Jacob, Charles Lamanna, Perry Clarke, and Ryan Roslansky. This will enable us to focus our brand strategy, our product roadmap, our models and our core infrastructure as one to deliver the best experiences possible for all our users.
Thank you for everything you’ve done over the last few years. I know how hard everyone has been pushing and the sacrifices many of you have made to help the company adapt to this new era.
We really do have an incredible opportunity to redefine Microsoft for this agentic revolution.
Mustafa’s mail has been edited slightly for external use.
The post Announcing Copilot leadership update appeared first on The Official Microsoft Blog.
Satya Nadella, Chairman and CEO, and Mustafa Suleyman, Executive Vice President and CEO of Microsoft AI, shared the below communications with Microsoft employees this morning. SATYA NADELLA MESSAGE I want to share two org changes we’re making to our Copilot org and superintelligence effort. It’s clear a new era of productivity is emerging as AI experiences…
The post Announcing Copilot leadership update appeared first on The Official Microsoft Blog.Read More
Microsoft at NVIDIA GTC: New solutions for Microsoft Foundry, Azure AI infrastructure and Physical AI
Microsoft combines accelerated computing with cloud scale engineering to bring advanced AI capabilities to our customers. For years, we’ve worked with NVIDIA to integrate hardware, software and infrastructure to power many of today’s most important AI breakthroughs.
What’s new at NVIDIA GTC
- Expanded Microsoft Foundry capabilities to build, deploy and operate production-ready AI agents on NVIDIA accelerators and open NVIDIA Nemotron models
- New Azure AI infrastructure optimized for inference-heavy, reasoning-based workloads, including the first hyperscale cloud to power on next-generation NVIDIA Vera Rubin NVL72 systems
- Deeper integration across Microsoft Foundry, Microsoft Fabric and NVIDIA Omniverse libraries and open frameworks to support Physical AI systems from simulation to real‑world operations
From Frontier models to production-ready agents
At the foundation of this system is Microsoft Foundry: serving as the operating system for building, deploying and operating AI at enterprise scale. Foundry builds on Azure to bring together models, tools, data and observability into a single system designed for production agents. Today we’re expanding those capabilities across Foundry Agent Service and NVIDIA Nemotron models.
The next-generation Foundry Agent Service and Observability in Foundry Control Plane are now generally available, enabling organizations to build and operate AI agents at production scale. Foundry Agent Service allows teams to quickly develop agents that reason, plan and act across tools, data and workflows. Once created, Foundry Control Plane provides the developer end-to-end visibility into agent behavior, unlocking both developer productivity as well as enterprise trust. Companies such as Corvus Energy are already using Foundry to replace manual inspection workflows with agent-driven operational intelligence across their global fleet.
We are further simplifying the path from prototype to production with the availability of Voice Live API integration with Foundry Agent Service, in public preview, which enables developers to build voice-first, multimodal, real-time agentic experiences. This pairs with the general availability of a refreshed Microsoft Foundry portal and expanded integrations for Palo Alto Networks’ Prisma AIRS and Zenity, delivering deeper builder experiences and runtime security across the entire agent lifecycle.
NVIDIA Nemotron models are also now available through Microsoft Foundry, joining the widest selection of models on any cloud, including the latest reasoning, frontier and open models. This bolsters our recent partnership announcement bringing Fireworks AI to Microsoft Foundry, enabling customers to fine-tune open-weight models like NVIDIA Nemotron into low-latency assets that can be distributed to the edge.
Scaling AI infrastructure for the world’s most demanding workloads
Inference AI workloads are reshaping cost, performance and system design requirements. To operationalize agentic AI at scale, customers need purpose-built infrastructure for inference‑heavy, reasoning‑based workloads that can be deployed and operated consistently across global and regulated environments.
Microsoft’s AI infrastructure approach is engineered to seamlessly bring next-generation NVIDIA systems into Azure datacenters that are designed for power, cooling networking and rapid generational upgrades. This allows our customers to move with speed and agility and stay at the leading edge from generation to generation.
In less than a year, we’ve deployed hundreds of thousands of liquid-cooled Grace Blackwell GPUs across our global datacenter footprint, and now we are excited to be the first hyperscale cloud to power on NVIDIA’s newest Vera Rubin NVL72 in our labs. Over the next few months, Vera Rubin NVL72 will be rolled out into our modern, liquid-cooled Azure datacenters.
Microsoft’s infrastructure innovation with NVIDIA also extends to sovereign and regulated environments to give customers control of both where AI runs and how it evolves over time. Recently, we announced Foundry Local support for modern infrastructure and large AI models, and today we now have initial support for NVIDIA Vera Rubin platform on Azure Local, extending accelerated AI capabilities to customer-controlled environments. This approach allows organizations to plan for next-generation AI workloads, including reasoning-based and agentic systems, while maintaining Azure-consistent operations, governance and security through our unified software layer with Azure Arc and Foundry Local.
Bringing AI into the physical world
As AI moves beyond digital experiences, Microsoft and NVIDIA are collaborating to support the next wave of Physical AI. At GTC, this work centers on NVIDIA Physical AI Data Factory Blueprint, with Microsoft Foundry as the platform for hosting and operating Physical AI systems on Azure at cloud scale.
By integrating this blueprint with Azure services as part of a Physical AI Toolchain, Microsoft enables developers to build, train and operate physical AI and robotics workflows that connect physical assets, simulation and cloud training environments into repeatable, enterprise-grade pipelines. To support, we are introducing a public Azure Physical AI Toolchain GitHub repository integrated with the Nvidia Physical AI Data Factory and with core Azure services.
To further the impact of AI in real‑world, physical environments, today Microsoft and NVIDIA are deepening the integration between Microsoft Fabric and NVIDIA Omniverse libraries, connecting live operational data with physically accurate digital twins and simulation. This allows organizations to see what’s happening across their physical systems, understand it in real time and use AI to decide what to do next. In practice, customers in manufacturing and operations and beyond are using this approach to move beyond dashboards and alerts to coordinated, AI‑driven action across machines, facilities and workflows.
From innovation to impact
Microsoft is delivering reliable, production‑scale AI by bringing together its global AI infrastructure, platforms and real‑world systems with the latest innovation from NVIDIA. For customers, this means the ability to operate intelligence continuously, running inference-heavy, reasoning-based and physical AI workloads with the performance, security and governance required for real businesses and regulated industries.
Whether powering always-on agents, scaling next-generation AI infrastructure or deploying intelligent systems in factories, energy facilities and sovereign environments, Microsoft and Nvidia are helping customers move faster from insight to action.
Yina Arenas leads product strategy and execution for Microsoft Foundry, overseeing the end–to–end AI product portfolio, infrastructure, developer experiences and foundation model integration across OpenAI, Anthropic, Mistral, DeepSeek and others. She delivers an enterprise ready, production grade AI platform trusted by global customers for secure, reliable and scalable AI.
The post Microsoft at NVIDIA GTC: New solutions for Microsoft Foundry, Azure AI infrastructure and Physical AI appeared first on The Official Microsoft Blog.
Microsoft combines accelerated computing with cloud scale engineering to bring advanced AI capabilities to our customers. For years, we’ve worked with NVIDIA to integrate hardware, software and infrastructure to power many of today’s most important AI breakthroughs. What’s new at NVIDIA GTC Expanded Microsoft Foundry capabilities to build, deploy and operate production-ready AI agents on…
The post Microsoft at NVIDIA GTC: New solutions for Microsoft Foundry, Azure AI infrastructure and Physical AI appeared first on The Official Microsoft Blog.Read More
Microsoft announces Experiences + Devices leadership changes
Rajesh Jha, Executive Vice President, Experiences + Devices, and Satya Nadella, Chairman and CEO, shared the below communications with Microsoft employees this morning.
RAJESH JHA MESSAGE
Subject: Organizing for the future and my transition
Dear Team,
After 35+ years at Microsoft, I am moving into retirement. I will transition out on July 1st and then stay in an advisory role.
Satya and I have been working on succession for some time. I am incredibly confident and excited about the future with Perry Clarke, Charles Lamanna, Pavan Davuluri, and Ryan Roslansky as EVP direct reports to Satya.
I am also excited to announce the promotions of Jeff Teper to EVP, and Sumit Chauhan and Kirk Koenigsbauer to President.
Please join me in congratulating these leaders on this well-deserved recognition.
We’re announcing these top-level changes today, and between now and June, my leadership team and I will work together to finalize the full cascade of details needed in this kind of transition. This includes aligning operating rhythms, decision ownership, and details on the future org structure, all so we’ll be fully aligned and ready to run at the start of FY27. Our intent in taking this approach is to minimize changes and not lose the great momentum we have.
Our priorities around SFI, QEI, and Copilot remain unchanged – let’s keep the intensity here.
I want to add that working with you all over the years, in the service of customers, has been an incredible privilege for me. I am deeply grateful.
Best regards,
Rajesh
SATYA NADELLA MESSAGE
Rajesh has been a constant throughout my entire life at Microsoft. From our earliest days working together, I have admired his unwavering commitment to his team, to our customers, to the products we build, and to the company. I have always been struck by his operational rigor, his ability to make the hard strategic calls, lead through the grind, and emerge stronger on the other side. That, to me, is what true leadership looks like.
When I think about the pantheon of leaders who have truly shaped this company, Rajesh stands firmly among them. He embodies the commitment that helped build and transform Microsoft into the company it is today, and it is on the strength of that foundation that we will continue to move forward.
And as we look to the future, the opportunity ahead is expansive. We have the depth of talent, the product ethos, and a clear sense of purpose as a company to ensure our technology advancements accrue to our mission of empowerment.
Rajesh – I am deeply grateful for all you have done for Microsoft and our customers and for all you have taught me personally. On behalf of all of us – Thank You.
Satya
The post Microsoft announces Experiences + Devices leadership changes appeared first on The Official Microsoft Blog.
Rajesh Jha, Executive Vice President, Experiences + Devices, and Satya Nadella, Chairman and CEO, shared the below communications with Microsoft employees this morning. RAJESH JHA MESSAGE Subject: Organizing for the future and my transition Dear Team, After 35+ years at Microsoft, I am moving into retirement. I will transition out on July 1st and then stay in an…
The post Microsoft announces Experiences + Devices leadership changes appeared first on The Official Microsoft Blog.Read More
Introducing the First Frontier Suite built on Intelligence + Trust
Today Microsoft is announcing:
- Wave 3 of Microsoft 365 Copilot
- Expanded model diversity with Claude and next-gen OpenAI models available today
- General availability of Agent 365 on May 1 for $15 per user
- General availability of the new Microsoft 365 E7: The Frontier Suite on May 1 for $99 per user1
Frontier Transformation is a holistic reimagining of business, aligning AI with human ambition to achieve an organization’s highest aspirations. It is the next evolution of AI Transformation — not only do we need to deliver efficiency and productivity, but we need to democratize intelligence and do more for humanity. Companies do not want or need more AI experimentation. They need AI that delivers real business outcomes and growth.
In my daily conversations with customers and partners, they typically question what the most important components of an AI solution are. Is it the model? Is it silicon? At Microsoft, we believe the two most essential elements of Frontier Transformation are Intelligence + Trust. Organizations need to harness their own unique work intelligence as they build agents and solutions; and all AI artifacts across their technology stack must be observed, managed and secured to ensure they are delivering value responsibly.
Intelligence that shows up in real work
I often say that zero-shot artifact creation is nothing more than a parlor trick. Models can reason over data, produce draft documents, presentations and spreadsheets, but they do not understand work. Real differentiation comes from intelligence — deep work context, embedded in the tools people already use. AI should amplify your intelligence but do so in a manner that protects your differentiation and unique value.
Work IQ amplifies an individual’s IQ by tapping into your organization’s IQ. It is the intelligence layer that enables Microsoft 365 Copilot and agents to know how you work, with whom you work, and the content upon which you collaborate. That is why Copilot is faster, more accurate and more trusted than solutions built on models and connectors alone.
This month, we are unleashing Work IQ with our next generation of agentic experiences in Wave 3 of Microsoft 365 Copilot in Word, Excel, PowerPoint and Outlook. Employees will have an enhanced chat experience in Copilot with the ability to create and augment artifacts, and the power to build their own agents within the canvas they work in every day.
Microsoft 365 Copilot is model diverse by design. Rather than betting on a single model, we built a system that makes every model useful at work. Customers get the choice, performance and flexibility in an open, heterogenous environment. Copilot leverages leading models from OpenAI and Anthropic, operating openly across clouds and data services without locking customers in. Claude is now available in mainline chat in Copilot via the Frontier program, alongside the latest generation of OpenAI models.
Microsoft 365 Copilot Wave 3 is not just a singular release of new capabilities but rather a commitment to continuous innovation. We will bring frontier capabilities with enterprise promises for our customers in an open and model diverse manner. Another great example of this is Copilot Cowork, which is in research preview. Built in close collaboration with Anthropic, we are bringing the technology that powers Claude Cowork into Microsoft 365 Copilot to enable long-running, multi-step work that unfolds over time. Learn about our Wave 3 news.
These announcements come as our customers across industries are already seeing the value of Microsoft 365 Copilot. Microsoft recently delivered its strongest quarter yet with Copilot, with paid seats growing more than 160% year over year and daily active usage up ten times, as customers increasingly make Copilot a core part of everyday work. Expansion is also accelerating as the number of customers deploying Copilot at significant scale — more than 35,000 seats — tripled year over year. Just last week, Mercedes Benz announced a global rollout of Microsoft 365 Copilot, following recent investments from NASA, Fiserv, ING, the University of Kentucky, the University of Manchester, the U.S. Department of the Interior and Westpac. This is in addition to the 90 percent of the Fortune 500 who now use Copilot.
Trust: from agent experimentation and sprawl to enterprise control
The speed of agent development and proliferation tells us customers see value, but without guardrails the pace of adoption turns into blind spots, diminished ROI and real security risk. As AI agents become more capable and autonomous, trust is nonnegotiable. IDC predicts 1.3B agents in circulation by 2028, and 80% of the Fortune 500 are already using Microsoft agents, led by operationally complex industries like manufacturing, financial services and retail.
That is why I am excited to announce the May 1 general availability of Microsoft Agent 365, the control-plane for AI agents. Priced at $15 per user, Agent 365 gives IT and security leaders a single place to observe, govern, manage and secure agents across the organization — using the same infrastructure, applications and protections they rely on to manage people today.
We are seeing tremendous momentum with our preview customers. In just two months, tens of millions of agents have appeared in the Agent 365 Registry. We have tens of thousands of customers that are already adopting Agent 365 to securely govern and scale AI agents across enterprise workflows.
At Microsoft, we are also using Agent 365 as Customer Zero and the early signals are clear. We now have visibility into more than 500,000 agents across the company with the most widely used focused on research, coding, sales intelligence, customer triage and HR self-service. That adoption is translating into real work. Over the past 28 days alone, agents have been generating more than 65,000 responses every day for employees. This is evidence that we are not simply experimenting, we are embedding agents in the flow of everyday work and empowering human ambition.
Introducing the Frontier Suite
To meet this demand, I am thrilled to announce we are bringing Intelligence + Trust together with Microsoft 365 E7: The Frontier Suite. Microsoft 365 E7 unifies Microsoft 365 E5, Microsoft 365 Copilot and Agent 365 into a single solution powered by Work IQ and integrated with the apps and security stack customers already rely on. It includes Microsoft Entra Suite and advanced Defender, Intune and Purview security capabilities, delivering comprehensive protection across agents and employees.
Customers have told us E5 alone is no longer enough; they do not want multiple tools stitched together, they want one trusted solution. At $99 per user, E7 is priced below purchasing these capabilities à la carte, giving customers a simpler, more cost-effective way to deploy enterprise AI at scale.
With the general availability of Agent 365 and the latest agentic experiences in Microsoft 365 Copilot offered as one Frontier suite, AI moves from experimentation to durable, enterprise-wide value, built on a foundation of Intelligence + Trust. This is how we make Frontier Transformation real. Microsoft is not just imagining the future of AI, we are empowering organizations across industries and around the world to build it.
1Microsoft 365 E7 is available with and without Teams.
The post Introducing the First Frontier Suite built on Intelligence + Trust appeared first on The Official Microsoft Blog.
Today Microsoft is announcing: Wave 3 of Microsoft 365 Copilot Expanded model diversity with Claude and next-gen OpenAI models available today General availability of Agent 365 on May 1 for $15 per user General availability of the new Microsoft 365 E7: The Frontier Suite on May 1 for $99 per user1 Frontier Transformation is a…
The post Introducing the First Frontier Suite built on Intelligence + Trust appeared first on The Official Microsoft Blog.Read More
Microsoft and OpenAI joint statement on continuing partnership
Since 2019, Microsoft and OpenAI have worked together to advance artificial intelligence responsibly and make its benefits broadly accessible. What began as a research partnership has grown into one of the most consequential collaborations in technology — grounded in mutual trust, deep technical integration, and a long‑term commitment to innovation.
As conversations around AI investments and partnerships grow and as OAI announces new funding and new partners as they did today, we want to ensure these announcements are understood within the existing construct of our partnership. Nothing about today’s announcements in any way changes the terms of the Microsoft and OpenAI relationship that have been previously shared in our joint blog in October 2025.
The partnership remains strong and central. Microsoft and OpenAI continue to work closely across research, engineering, and product development, building on years of deep collaboration and shared success.
Our IP relationship continues unchanged. Microsoft maintains its exclusive license and access to intellectual property across OpenAI models and products. Collaborations like the partnership between OpenAI and Amazon were always contemplated under our agreements and Microsoft is excited to see what they build together.
Our commercial and revenue share relationship remains unchanged. The ongoing revenue share arrangement remains unchanged and has always included sharing revenue from partnerships between OpenAI and other cloud providers.
Azure remains the exclusive cloud provider of stateless OpenAI APIs. Microsoft is the exclusive cloud provider for stateless APIs that provide access to OpenAI’s models and IP. These APIs can be purchased from Microsoft or directly from OpenAI. Customers and developers benefit from Azure’s global infrastructure, security, and enterprise-grade capabilities at scale. Any stateless API calls to OpenAI models that result from a collaboration between OpenAI and any third party – including Amazon – would be hosted on Azure.
OpenAI’s first party products, including Frontier, will continue to be hosted on Azure.
AGI definition and processes are unchanged. The contractual definition of AGI and the process for determining if it has been achieved remains the same.
The partnership supports OpenAI’s growth. As OpenAI scales, it continues to have flexibility to commit to additional compute elsewhere, including through large-scale infrastructure initiatives such as the Stargate project.
The partnership was designed to give Microsoft and OpenAI room to pursue new opportunities independently, while continuing to collaborate, which each company is doing, together and independently.
We remain committed to our partnership and to the shared mission that brought us together. We continue to work side‑by‑side to deliver powerful AI tools, advance responsible development, and ensure that AI benefits people and organizations everywhere.
The post Microsoft and OpenAI joint statement on continuing partnership appeared first on The Official Microsoft Blog.
Since 2019, Microsoft and OpenAI have worked together to advance artificial intelligence responsibly and make its benefits broadly accessible. What began as a research partnership has grown into one of the most consequential collaborations in technology — grounded in mutual trust, deep technical integration, and a long‑term commitment to innovation. As conversations around AI investments and partnerships…
The post Microsoft and OpenAI joint statement on continuing partnership appeared first on The Official Microsoft Blog.Read More
Microsoft Sovereign Cloud adds governance, productivity and support for large AI models securely running even when completely disconnected
As digital sovereignty becomes a strategic requirement, organizations are rethinking how they deploy critical infrastructure and AI capabilities under tighter regulatory expectations and higher risk conditions. Microsoft’s approach to sovereignty is grounded in enabling enterprises, public sectors and regulated industries to participate in the digital economy securely, independently and on their own terms. The Microsoft Sovereign Cloud brings together productivity, security and cloud workloads to span both public and private environments. Customers can choose the right control posture for each workload, through a continuum of sovereign options protecting against fragmenting their architecture or increasing operational risk. Trust is built on confidence: confidence that data stays protected, controls are enforceable and operations can continue under real-world conditions.
To support these confidential environments, Microsoft offers full stack capabilities that support customers across connected, intermittently connected and fully disconnected modes. Today’s expansion of capabilities includes three major updates:
- Azure Local disconnected operations (now available) – Organizations can now run mission-critical infrastructure with Azure governance and policy control, with no cloud connectivity, optimizing continuity for sovereign, classified or isolated environments.
- Microsoft 365 Local disconnected (now available) – Core productivity workloads, Exchange Server, SharePoint Server and Skype for Business Server can run fully inside the customer’s sovereign operational boundary on Azure Local, keeping teams productive even when disconnected from the cloud.
- Foundry Local adds modern infrastructure capabilities and support for large AI models – Organizations can now bring large AI models into fully disconnected, sovereign environments with Foundry Local. Using modern infrastructure from partners like NVIDIA, customers with sovereign needs will now be able to run multimodal models locally on their own hardware, inside strict sovereign boundaries enabling powerful, local AI inferencing in fully disconnected environments.

This delivers a truly localized full stack experience built on Azure Local infrastructure and Microsoft 365 Local workloads, designed to stay resilient across any connectivity condition, with large models being part of Foundry Local extending the stack to run advanced multimodal models locally, securely, even when fully disconnected. Customers can now help maintain uninterrupted operations, keep mission critical workloads protected and apply consistent governance and policy enforcement, while keeping data, identities and operations within their sovereign boundaries.
Azure Local runs critical infrastructure locally, even when disconnected
For workloads with specialized requirements, Azure Local provides the on-premises foundation with consistent Azure governance and policy controls. With Azure Local disconnected operations, management, policy and workload execution stay within the customer-operated environments, so services continue running securely even when environments must be isolated or connectivity is not available. Using familiar Azure experiences and consistent policies, organizations can deploy and govern workloads locally without depending on continuous connection to public cloud services. Azure Local is designed to scale with mission-critical needs from smaller deployments to larger footprints that support data-intensive and AI-driven workloads. Customers can start fast, expand over time and maintain a unified operational model, all within their sovereign boundary.
Operating in disconnected environments surfaces constraints that go beyond traditional cloud assumptions: External dependencies may be unacceptable, connectivity may be intentionally restricted and operational continuity is a business imperative.
“The availability of Azure Local disconnected operations represents a breakthrough for organizations that need control over their data without sacrificing the power of the Microsoft Cloud. For Luxembourg, where digital sovereignty is not just a principle but a strategic necessity, this model offers the resilience, autonomy and trust our market expects. By combining Microsoft’s technological leadership with Proximus NXT’s sovereign cloud expertise, we are enabling our customers to innovate confidently — even in fully disconnected mode,” said Gerard Hoffmann, CEO Proximus Luxembourg.
Microsoft 365 Local keeps productivity and collaboration available in fully disconnected environments
As sovereign environments move into disconnected environments, keeping people productive becomes just as critical as keeping infrastructure online. Building on more than a decade of delivering and supporting these services, Microsoft 365 Local disconnected brings that continuity to the productivity layer, delivering Microsoft’s core server workloads — Exchange Server, SharePoint Server and Skype for Business Server supported through at least 2035 — directly into the customer’s sovereign private cloud.
With Microsoft 365 Local, teams can communicate, share information and collaborate securely within the same controlled boundary as their infrastructure and AI workloads. Everything runs locally, under customer-owned policies, with full control of data resiliency, access and compliance. By operating with Azure-consistent management and governance, customers get the productivity experience they rely on, designed to stay resilient and secure even when offline.
Bringing large models and modern infrastructure to Foundry Local
With the availability of larger models and modern infrastructure as part of the Foundry Local portfolio, Microsoft is enabling customers with highly secure environments the ability to run multimodal, large models directly inside their sovereign private cloud environments. This brings the richness of Microsoft’s enterprise AI capabilities to on-premises systems, complete with local inferencing and APIs that operate completely within customer-controlled data boundaries.
Expanding beyond small models, the integration of Foundry Local with Azure Local is specifically designed to support large-scale models utilizing the latest GPUs from partners such as NVIDIA. Microsoft will provide comprehensive support for deployments, updates and operational health. Even as inferencing demands increase over time, customers retain complete control over their data and hardware.
Choice and control without added complexity
Customers facing strict sovereignty and regulatory requirements are clear that a fully disconnected sovereign private cloud is a key business need. Microsoft Sovereign Private Cloud is designed to meet these needs head-on, enabling secure, compliant operations even in environments with no external connectivity. At the same time, we recognize that disconnected environments are not one-size-fits-all; some customers operate across connected, hybrid and disconnected modes based on mission, risk and regulation. Our approach helps customers to meet strict sovereign requirements in fully disconnected scenarios without compromising simplicity, while retaining flexibility where connectivity is possible. Together, Azure Local disconnected operations, Microsoft 365 Local and Foundry Local help organizations choose where workloads run and how environments are managed, while standardizing governance and operational practices across connected and disconnected deployments.
Get started
- Azure Local disconnected operations and Microsoft 365 Local disconnected are now available worldwide, and large models on Foundry Local are available to qualified customers.
- Explore the Microsoft Sovereign Cloud
- Learn more about Azure Local disconnected operations
Douglas Phillips leads global engineering efforts for Microsoft’s specialized, sovereign, and private clouds. He is responsible for Microsoft’s global strategy, products and operations that bring Microsoft’s industry-leading solutions, including Azure, our adaptive cloud portfolio and Microsoft 365 collaboration suite, to customers with additional sovereignty, security, edge and compliance requirements.
The post Microsoft Sovereign Cloud adds governance, productivity and support for large AI models securely running even when completely disconnected appeared first on The Official Microsoft Blog.
As digital sovereignty becomes a strategic requirement, organizations are rethinking how they deploy critical infrastructure and AI capabilities under tighter regulatory expectations and higher risk conditions. Microsoft’s approach to sovereignty is grounded in enabling enterprises, public sectors and regulated industries to participate in the digital economy securely, independently and on their own terms. The Microsoft Sovereign Cloud brings together productivity, security and cloud workloads to span both…
The post Microsoft Sovereign Cloud adds governance, productivity and support for large AI models securely running even when completely disconnected appeared first on The Official Microsoft Blog.
Read More
Asha Sharma named EVP and CEO, Microsoft Gaming
Satya Nadella, Chairman and CEO, and members of his executive team shared the following communications with employees today.
SATYA NADELLA MESSAGE
Gaming has been part of Microsoft from the start. Flight Simulator shipped before Windows, and you can practically ray‑trace a line from DirectX in the ’90s to the accelerated‑compute era we’re in today.
As we celebrate Xbox’s 25th year, the opportunity and innovation agenda in front of us is expansive. Today we reach over 500 million monthly active users, are a top publisher across all platforms, and continue to innovate across gaming hardware, content, and community, in service of creators and players everywhere.
I am long on gaming and its role at the center of our consumer ambition, and as we look ahead, I’m excited to share that Asha Sharma will become Executive Vice President and CEO, Microsoft Gaming, reporting to me. Over the last two years at Microsoft, and previously as Chief Operating Officer at Instacart and a Vice President at Meta, Asha has helped build and scale services that reach billions of people and support thriving consumer and developer ecosystems. She brings deep experience building and growing platforms, aligning business models to long-term value, and operating at global scale, which will be critical in leading our gaming business into its next era of growth.
Matt Booty will become Executive Vice President and Chief Content Officer, reporting to Asha. Matt’s career reflects a lifelong commitment to games and to the people who make them. Under his leadership, Microsoft Gaming has grown to span nearly 40 studios across Xbox, Bethesda, Activision Blizzard, and King, which are home to beloved franchises including Halo, The Elder Scrolls, Call of Duty, World of Warcraft, Diablo, Candy Crush, and Fallout.
Together, Asha and Matt have the right combination of consumer product leadership and gaming depth to push our platform innovation and content pipeline forward. Last year, Phil Spencer made the decision to retire from the company, and since then we’ve been talking about succession planning. I want to thank Phil for his extraordinary leadership and partnership. Over 38 years at Microsoft, including 12 years leading Gaming, Phil helped transform what we do and how we do it. He expanded our reach across PC, mobile, and cloud; nearly tripled the size of the business; helped shape our strategy through the acquisitions of Activision Blizzard, ZeniMax, and Minecraft; and strengthened our culture across our studios and platforms. I’ve long admired Phil’s unwavering commitment to players, creators, and his team, and I am personally grateful for his leadership and counsel. He will continue working closely with Asha to ensure a smooth transition.
We have extraordinary creative talent across our studios and a global platform that is second to none. I’m excited for how we will capture the opportunity ahead and define what comes next, while staying grounded in what players and creators value.
Please join me in congratulating Asha and Matt on their new roles, and in thanking Phil for everything he has done for Microsoft and for our industry.
PHIL SPENCER MESSAGE
When I walked through Microsoft’s doors as an intern in June of 1988, I could never have imagined the products I’d help build, the players and customers we’d serve, or the extraordinary teams I’d be lucky enough to join. It’s been an epic ride and truly the privilege of a lifetime.
Last fall, I shared with Satya that I was thinking about stepping back and starting the next chapter of my life. From that moment, we aligned on approaching this transition with intention, ensuring stability, and strengthening the foundation we’ve built. Xbox has always been more than a business. It’s a vibrant community of players, creators, and teams who care deeply about what we build and how we build it. And it deserves a thoughtful, deliberate plan for the road ahead.
Today marks an exciting new chapter for Microsoft Gaming as Asha Sharma steps into the role of CEO, and I want to be the first to welcome her to this incredible team. Working with her over the past several months has given me tremendous confidence. She brings genuine curiosity, clarity and a deep commitment to understanding players, creators, and the decisions that shape our future. We know this is an important moment for our fans, partners, and team, and we’re committed to getting it right. I’ll remain in an advisory role through the summer to support a smooth handoff.
I’m also grateful for the strength of our studios organization. Matt Booty and our studios teams continue to build an incredible portfolio, and I have full confidence in the leadership and creative momentum across our global studios. I want to congratulate Matt on his promotion to EVP and Chief Content Officer.
As part of this transition, Sarah Bond has decided to leave Microsoft to begin a new chapter. Sarah has been instrumental during a defining period for Xbox, shaping our platform strategy, expanding Game Pass and cloud gaming, supporting new hardware launches, and guiding some of the most significant moments in our history. I’m grateful for her partnership and the impact she’s had, and I wish her the very best in what comes next.
Most of all, to everyone in Microsoft Gaming, I want to say “thank you.” I’ve learned so much from this team and community, grown alongside you, and been continually inspired by the creativity, courage, and care you bring to players, creators, and to one another every day.
I’m incredibly proud of what we’ve built together over the last 25 years, and I have complete confidence in all of you and in the opportunities ahead. I’ll be cheering you on in this next chapter as Xbox’s proudest fan and player.
Phil
XBL: P3
ASHA SHARMA MESSAGE
Dear team,
Today I begin my role as CEO of Microsoft Gaming.
I feel two things at once: humility and urgency.
Humility because this team has built something extraordinary over decades. Urgency because gaming is in a period of rapid change, and we need to move with clarity and conviction.
I am stepping into work shaped by generations of artists, engineers, designers, writers, musicians, operators and more who create worlds that have brought joy and deep personal meaning to hundreds of millions of players. The level of craft here is exceptional, and it is amplified by Xbox, which was founded in the belief that the power of games connects people and pushes the industry forward.
Thank you to Phil for his leadership, and to every studio, platform, and operations team that built this foundation. We are stewards of some of the most loved stories and characters in entertainment and bring players and creators together around the fun and community of gaming in entirely new ways.
My first job is simple: understand what makes this work and protect it.
That starts with three commitments.
First, great games.
Everything begins here. We must have great games beloved by players before we do anything. Unforgettable characters, stories that make us feel, innovative game play, and creative excellence. We will empower our studios, invest in iconic franchises, and back bold new ideas. We will take risks. We will enter new categories and markets where we can add real value, grounded in what players care about most.
I promoted Matt Booty in honor of this commitment. He understands the craft and the challenges of building great games, has led teams that deliver award-winning work, and has earned the trust of game developers across the industry.
Second, the return of Xbox.
We will recommit to our core Xbox fans and players, those who have invested with us for the past 25 years, and to the developers who build the expansive universes and experiences that are embraced by players across the world.
We will celebrate our roots with a renewed commitment to Xbox starting with console which has shaped who we are. It connects us to the players and fans who invest in Xbox, and to the developers who build ambitious experiences for it.
Gaming now lives across devices, not within the limits of any single piece of hardware. As we expand across PC, mobile, and cloud, Xbox should feel seamless, instant, and worthy of the communities we serve. We will break down barriers so developers can build once and reach players everywhere without compromise.
Third, future of play.
We are witnessing the reinvention of play.
To meet the moment, we will invent new business models and new ways to play by leaning into what we already have: iconic teams, characters, and worlds that people love. But we will not treat those worlds as static IP to milk and monetize. We will build a shared platform and tools that empower developers and players to create and share their own stories.
As monetization and AI evolve and influence this future, we will not chase short-term efficiency or flood our ecosystem with soulless AI slop. Games are and always will be art, crafted by humans, and created with the most innovative technology provided by us.
The next 25 years belong to the teams who dare to build something surprising, something no one else is willing to try, and have the patience to see it through. We have done this before, and I am here to help us do it again. I want to return to the renegade spirit that built Xbox in the first place. It will require us to relentlessly question everything, revisit processes, protect what works, and be brave enough to change what does not.
Thank you for welcoming me into this journey.
Asha
MATT BOOTY MESSAGE
I read Phil’s note with much gratitude. He has been a steady champion for game creators and our studio teams, and I’ve learned so much from his leadership over the years. All our games have benefited from his foundational support. I’m also grateful to Satya for his ongoing commitment to gaming and holding a vision of how it can connect back to the larger company.
Looking forward, I’m excited to partner with Asha as our next CEO. Our first conversations centered on her commitment to making great games and the role that plays in our overall success. She asks questions, pushes for clarity, and wants our choices grounded in player and developer needs. That mindset matters as the industry around us is changing quickly: how players engage, how games are made, and how business models and platforms evolve.
We have good reasons to believe in what’s ahead. This organization and its franchises have navigated change for decades, and our strength comes from teams who know how to adapt and keep delivering. That confidence is grounded in a strong pipeline of established franchises, new bets we believe in, and clear player demand for what we are building.
My focus is on supporting the teams and leaders we have in place and creating the conditions for them to do their best work. To be clear, there are no organizational changes underway for our studios.
Thanks for everything you do for players and for each other.
Matt
The post Asha Sharma named EVP and CEO, Microsoft Gaming appeared first on The Official Microsoft Blog.
Satya Nadella, Chairman and CEO, and members of his executive team shared the following communications with employees today. SATYA NADELLA MESSAGE Gaming has been part of Microsoft from the start. Flight Simulator shipped before Windows, and you can practically ray‑trace a line from DirectX in the ’90s to the accelerated‑compute era we’re in today. As…
The post Asha Sharma named EVP and CEO, Microsoft Gaming appeared first on The Official Microsoft Blog.
Read More
A milestone achievement in our journey to carbon negative
In 2020, Microsoft announced a moonshot commitment to become carbon negative by 2030 — accelerating work across our company to advance the partnerships and technologies needed to advance sustainability for our businesses, our customers and the world. A key milestone on this journey was our aim to match 100% of our annual global electricity consumption with renewable energy(1) by 2025. Today, we are pleased to share that Microsoft has achieved this milestone(2). This progress helps drive investment into the power systems where we operate, expand clean energy supply and advance broader energy innovation.
Over a decade of investment: 40 gigawatts of new renewable energy contracted
What began in 2013 with a single 110 megawatt (MW) power purchase agreement (PPA) in Texas — a small first step to demonstrate how corporate procurement could scale clean energy(3) — has evolved into one of the largest clean energy portfolios in the world. This first deal not only supported Microsoft’s early cloud services but also set in motion a decade of commercial partnerships and learning-by-doing that served to demonstrate how corporate demand for advanced energy solutions can help to achieve a more affordable and sustainable power system, while supporting reliability for customers.
Since our carbon negative announcement in 2020, we have contracted 40 gigawatts (GW) of new renewable energy supply across 26 countries, working with more than 95 utilities and developers across 400+ contracts and counting. To put that amount in perspective — that’s enough energy to power about 10 million US homes. Of that contracted volume, 19 GW are now online, delivering new clean energy supply to the power grid, while the remainder are slated to come online over the next five years.
Our new renewable energy procurement continues to deliver significant environmental benefits, including the reduction of Microsoft’s reported Scope 2 carbon dioxide emissions by an estimated 25 million tons(4) and the mobilization of billions of dollars’ worth of private investment in regions where we operate.
Catalyzing market investment through bankable, repeatable models
Microsoft is among the early pioneers in developing technical and commercial practices that help advance bankable, repeatable and scalable procurement tools suitable for each market. Our clean energy purchasing navigates a global patchwork of power market designs, requiring creativity in how we balance cost, time to market and project sizing in our portfolio across planning, contracting and management.
Our work has benefited from a broad coalition of partners helping to build this market together. According to Bloomberg New Energy Finance, more than 200 global corporations collectively purchased nearly 200 GW of clean energy around the world since 2008. Working alongside other clean energy buyers — as well as hundreds of utilities, manufacturers, financiers, developers and engineers — we have helped reduce transaction costs, expand developer access to financing and streamline procurement approaches that other buyers can adopt.
This global flywheel of partnership, investment, technology and policy innovation is expected to continue to facilitate billions of dollars’ worth of investment into infrastructure and jobs. And as we’ve seen repeatedly, when Microsoft sends a clear market signal for world-class, first-of-a-kind technologies and infrastructure, the power sector rises to the challenge. Our procurement over the past decade has demonstrated that partnerships, communities and innovation are essential ingredients that help to accelerate first-of-a-kind technologies and infrastructure at scale.
Scaling partnerships to scale infrastructure
Critical to Microsoft’s success in expanding digital infrastructure and supporting our local communities is our ability to build trusted partnerships with the over 95 global energy suppliers that support our clean energy portfolio. We have sourced clean energy through multiple requests for proposal or information, bilateral engagements and clean tariffs to evaluate over 5,000 unique carbon-free energy projects around the world.
Today, Microsoft has six energy company partners with which we have over 1 GW of contracted renewable energy capacity, and more than 20 energy supplier partners where each partner has at least five separate renewable energy projects with Microsoft — evidence of the durable, repeatable relationships necessary to scale clean energy. Combining scale with speed, Microsoft’s landmark 10.5 GW framework agreement with Brookfield sends a long-term, 2030 demand signal to the market that enables developers to raise funding more efficiently, bolster supply chains, hire engineers and construct world-class energy infrastructure.
Putting communities first
Our renewable energy procurement has mobilized billions of dollars in private investment, supported thousands of jobs across the communities where we operate and delivered meaningful co-benefits. Through partnerships with developers and nonprofit organizations, we’ve worked to embed community-driven benefits into our energy portfolio. These benefits include robust infrastructure, economic inclusion and support for community-focused organizations.
Our support for communities shows up in projects like our 500 MW PPA with Sol Systems, or our 250 MW PPA with Volt Energy Utility that provided local training and jobs, as well as grants to community nonprofit organizations and habitat restoration. We’ve also signed over 1.5 GW of distributed solar, bringing clean energy directly into hundreds of communities around the world. Landmark agreements like our 500 MW offtake with Pivot Energy, or our 270 MW offtake with PowerTrust are expected to foster employment, energy cost savings and grid resilience in communities across the United States, Mexico and Brazil. More details on the above examples and our approach to community benefits in clean energy agreements can be found in a dedicated Microsoft whitepaper.
Innovation unlocks new markets and pathways
Microsoft’s clean energy procurement continues to play an important role in catalyzing technical, commercial and regulatory innovation. Our commercial efforts have helped lower barriers to entry into new markets and expand access into multi-technology contracts that accelerate decarbonization.
In Japan, Microsoft signed one of the first corporate PPAs in the country’s restructured power market. Our 25 MW, 20-year agreement with Shizen represents the first single-asset virtual PPA executed in the country, which helped pave the way to over 2GWs of corporate procurement since 2024, according to Bloomberg New Energy Finance. Alongside opening new markets, we have structured several multi-technology offtakes in nascent markets for corporate procurement. In India, Microsoft purchased a combined 437 MW solar/wind hybrid offtake from Renew, where our projects will support energy access and rural electrification. In Microsoft’s home state of Washington, our datacenters in Douglas County are supplied by 100% carbon-free energy, as we leverage a creative blend of new wind power and hydropower storage to deliver around-the-clock clean energy.
Looking forward to 2030 and beyond
In 2025, the International Energy Agency (IEA) described a new “Age of Electricity,” marked by accelerating electricity demand from electric vehicles, air conditioners, data centers and heat pumps. As the world electrifies more of the economy, the demand for affordable, reliable and clean electricity will continue to rise.
Our experience building Microsoft’s clean energy portfolio both reflects and furthers global trends. According to IEA data, since 2000, renewable energy generation has expanded nearly four-fold. In many power markets across the world, clean energy is one of the fast-growing sources of generation, and often the one with the fastest time-to-market. Corporate buyers like Microsoft continue to serve as an important catalyst in driving commercial demand for innovation and infrastructure across the power industry.
As we continue our journey toward becoming carbon negative by 2030, Microsoft will continue to push for an expansive focus on adding all forms of carbon-free electricity solutions, complementing and adding to our portfolio of renewable energy resources. We recognize that the world’s rising electricity needs require a balanced, all-of-the-above decarbonization strategy to meet global economic growth and environmental goals, and our sustainability goals will continue to support this approach moving forward. Such a strategy requires a broader set of carbon-free energy and grid-enabling technologies, including nuclear energy, next-generation grid infrastructure and carbon capture technology. Just as renewable energy was a relatively small part of global energy grids in 2013 when we signed our first PPA, today many advanced energy technologies remain early in their development but offer significant promise to accelerate progress towards an affordable, reliable and sustainable energy future.
Microsoft has already taken early steps to support the advancement of a broader set of carbon-free energy technologies as we partner with Helion and Constellation Energy on a 50 MW fusion project in Washington state and work with Constellation to restart the 835 MW Crane Clean Energy Center in Pennsylvania. Microsoft’s Climate Innovation Fund has allocated $806 million of capital to 67 investees, with 38% directed toward Energy Systems — advancing carbon-free power and fuels, energy storage and energy management solutions.
We welcome continued collaboration with our power sector partners to bring these innovations to market and incorporate new technology tools in the process to accelerate their development.
We will continue to build and leverage new AI-driven tools to design, permit and deploy new power technologies that help expand and more efficiently operate the electricity grid, bringing more clean energy online faster. This work is exemplified by our recently announced collaborations with Idaho National Laboratory and the Midcontinental System Operator, among other examples.
And as we advance innovative energy technologies, we recognize that standards must evolve alongside innovation. That is why we will continue participating in industry forums that strengthen carbon accounting frameworks — so that our clean energy procurement is measured with greater accuracy and delivers real world emissions reductions, with a continued focus on maintaining the high level of integrity that the world has come to expect from Microsoft.
Our carbon negative commitment remains a call to action — for Microsoft, our customers and the broader technology sector — to invest in an affordable, reliable and sustainable power system. As we look toward 2030, that call to action has never been clearer.
Gratitude — and momentum for the work ahead
Today’s milestone represents a shared achievement among the utility professionals, clean energy developers, community leaders, technology innovators and forward-thinking policymakers who continue the deployment of renewable energy. Meeting today’s milestone shows what partnership can deliver in bringing big ideas to life. The future of carbon-free energy is one that we will create – together.
As Microsoft’s Chief Sustainability Officer, Melanie Nakagawa leads the company’s targets to be carbon negative, water positive, and zero waste by 2030. She brings deep experience at the intersection of policy, business, and technology to advance climate and sustainability solutions globally.
As President of Cloud Operations + Innovation at Microsoft, Noelle Walsh leads the organization that powers the global Microsoft Cloud. She oversees the company’s physical cloud infrastructure and operations, with a charter focused on safety, security, availability, sustainability, and competitive infrastructure growth—bringing decades of global operational leadership.
Footnotes
- Renewable energy is defined within Microsoft’s fact sheet https://aka.ms/SustainabilityFactsheet2025, which represents FY24 data.
- To date, Microsoft’s renewable energy target includes two primary categories: renewable energy from contracted projects and grid mix. The first is renewable energy delivered under PPAs or similar long-term contracting mechanisms, generally for new projects where our financial involvement in the project’s development is critical for its success. This category represents more than 90% of the renewable energy applied to achieve our 2025 target.The second category is “grid mix” – renewable energy supported via our standard utility relationships and rates, inclusive of policy programs such as renewable portfolio standards and state and utility decarbonization goals.Our 2025 100% renewable target does not include purchases from short-term, so-called “spot market” renewable energy credits (RECs) sourced from operational clean energy projects.With the above in mind, Microsoft leverages a straightforward formula to determine our 100% renewable energy metric on a global, annual basis. We update and further detail the methodology and assumptions behind this formula in our annual sustainability reports:

- Clean energy— also referred to in this blog as carbon free energy —is defined within Microsoft’s fact sheet https://aka.ms/SustainabilityFactsheet2025, which represents FY24 data.
- Reduction of reported Scope 2 emissions are calculated between FY20-25, the cumulative difference between location based and market-based emissions, excluding the use of short-term, so-called “spot market” RECs
The post A milestone achievement in our journey to carbon negative appeared first on The Official Microsoft Blog.
In 2020, Microsoft announced a moonshot commitment to become carbon negative by 2030 — accelerating work across our company to advance the partnerships and technologies needed to advance sustainability for our businesses, our customers and the world. A key milestone on this journey was our aim to match 100% of our annual global electricity consumption…
The post A milestone achievement in our journey to carbon negative appeared first on The Official Microsoft Blog.
Read More
Updates in two of our core priorities
Satya Nadella, Chairman and CEO, posted the below message to employees on Viva Engage this morning.
I am excited to share a couple updates in two of our core priorities: security and quality. Hayete Gallot is rejoining Microsoft as Executive Vice President, Security, reporting to me. I’ve also asked Charlie Bell to take on a new role focused on engineering quality, reporting to me.
Charlie and I have been planning this transition for some time, given his desire to move from being an org leader to being an IC engineer. And I love how energized he is to practice this craft here day in and day out!
Hayete joins us from Google where she was President, Customer Experience for Google Cloud. Before that, she spent more than 15 years at Microsoft with senior leadership roles across engineering and sales, playing multiple critical roles in building two of our biggest franchises – Windows and Office, leading our commercial solution areas’ go-to-market efforts. And she was instrumental in the design and implementation of our Security Solution Area. She brings an ethos that combines product building with value realization for customers, which is critical right now.
As we shared during our quarterly earnings last week, we have great momentum in security, including progress with Security Copilot agents, strong Purview adoption, and continued customer growth, and we will build on this.
We have a deep bench of talent and leaders across our security business, and this team will now report to Hayete. Additionally, Ales Holecek will take on a new role as Chief Architect for Security, reporting to Hayete. Ales has spent years leading architecture and development across some of our most important platforms and will help bring that same sensibility to security and its connections back to our existing scale businesses and the Agent Platform.
As we shared yesterday, we have a new operating rhythm with commercial cohorts, and Hayete and her team will now be accountable for our security product rhythms as part of this process.
Charlie built our Security, Compliance, Identity, and Management organization and helped rally the company behind the Secure Future Initiative. And we’re fortunate to have his continued focus and leadership on another one of our top priorities. With our Quality Excellence Initiative, we have increased accountability and accelerated progress against our engineering objectives to ensure we always deliver durable, high quality-experiences at global scale. And Charlie will partner closely with Scott Guthrie and Mala Anand on this work.
I’m excited to welcome Hayete back to Microsoft to advance this mission critical work, and grateful to Charlie for all he has done for our security business and what he will continue to do for the company.
Satya
The post Updates in two of our core priorities appeared first on The Official Microsoft Blog.
Satya Nadella, Chairman and CEO, posted the below message to employees on Viva Engage this morning. I am excited to share a couple updates in two of our core priorities: security and quality. Hayete Gallot is rejoining Microsoft as Executive Vice President, Security, reporting to me. I’ve also asked Charlie Bell to take on a…
The post Updates in two of our core priorities appeared first on The Official Microsoft Blog.Read More
How Microsoft is empowering Frontier Transformation with Intelligence + Trust
At Microsoft Ignite in November, we introduced Frontier Transformation — a holistic reimagining of business aligning AI with human ambition to help organizations achieve their highest aspirations and growth potential. While AI Transformation centered on efficiency and productivity, Frontier Transformation challenges us to do more for humanity by democratizing intelligence to unlock creativity and innovation for organizations and people around the world.
Across industries, our customers are leading the way to becoming Frontier; sharing three common traits anchored in a foundation of Intelligence + Trust. AI in the flow of human ambition, putting Copilots and agents directly in the tools people use; and ubiquitous innovation, empowering the maker in every role. These capabilities are served through Microsoft’s new intelligence layer: Work IQ, which understands how people work; Fabric IQ, which provides a trusted semantic layer for reasoning over an organization’s data; and Foundry IQ, the world’s leading AI app server powering safe, scalable agent experiences. Together, these capabilities put the “I” back in AI by grounding Copilots and agents in an organization’s own data, logic and workflows to fully understand operations and drive decisions that matter most. The third trait is observability at every layer of the stack; ensuring trust, safety and reliable outcomes. As the control plane to observe, govern and secure all AI artifacts, Agent 365 provides a unified view of every AI agent running in an organization’s environment — whether built on Microsoft’s platforms or others.
Our customers and partners are showcasing what can be achieved with Frontier Transformation and human ambition paired with Copilots + agents, and I am pleased to share their stories — including many onstage with us at Ignite. Their journeys demonstrate what is possible for organizations everywhere when AI-first innovation is built upon Intelligence + Trust.
Putting AI in the flow of human ambition so people can achieve more in every role, across every industry
Using a secure Azure foundation, Epic embedded AI directly into clinical workflows, enabling hundreds of thousands of clinicians worldwide to work faster and deliver higher quality care. Epic AI generates documentation in the flow of work, reducing time spent on prior authorization questions by over 40% and surfacing critical insights that could be missed during manual review. In one month alone, Epic AI automatically generated more than 16 million patient record summaries, helping clinicians reduce administrative workload and speed time to treatment. AI-driven imaging follow-up also boosted early cancer detection at the Christ Hospital to 69%, far above the national 46% average. By delivering real improvements like these today, Epic is building confidence and familiarity that will accelerate adoption of tomorrow’s AI-enabled breakthroughs in precision medicine, drug discovery and the understanding of disease.
To create a consistent experience across its entire workforce, heritage brand Levi Strauss & Co. standardized on Windows 11, Copilot+ PCs, Intune, Microsoft 365 Copilot and Microsoft Foundry to give every team — from designers to retail associates to distribution centers — a modern, AI-powered workplace. With Copilot and agents accelerating workflows and eliminating fragmentation across legacy systems, teams can model demand faster, bring products to market with greater precision and spend more time on creative and commercial work that strengthens the brand. They are also reducing operational noise, strengthening security and scaling insights across design, merchandising, retail and supply chain. With a unified, secure Microsoft platform, Levi’s is enriching the employee experience, driving sharper execution and building durable advantage in an increasingly dynamic market.
London Stock Exchange Group (LSEG) is unifying the data foundation of global finance by modernizing its platform on Microsoft Fabric and bringing trusted financial intelligence directly into Microsoft 365 Copilot. The company has consolidated 30 legacy data systems, 1,200 datasets and more than 33 petabytes of financial content into a single, governed environment. This unified foundation is now delivering faster, cleaner insights to 44,000 customers in over 170 countries and cutting product development timelines from years to months. With Fabric and Copilot working together, financial professionals can access LSEG’s expansive data and analytics directly in the flow of work — helping them make decisions with greater speed and confidence while reducing friction across risk modeling, regulatory compliance and investment workflows. By simplifying the data estate first, LSEG is safely surfacing insights through Microsoft 365 Copilot and empowering teams across the organization to innovate with consistency, compliance and at global scale.
The University of Manchester is the first higher education institution in the world to provide Microsoft 365 Copilot access and training to all 65,000 students and staff. Learners and researchers will gain equitable access to Copilot-powered tools to strengthen teaching, accelerate interdisciplinary discovery and build future-ready skills. For students, this is an essential aid for revision, translation and academic success; while university leadership can ensure responsible use policies and training so every student can use AI ethically and confidently. Researchers can synthesize vast volumes of information across fields from photonic materials to biomedical science, enabling faster progress on challenges from cancer treatment to sustainable manufacturing; while operationally, Copilot helps administrative staff free their time for higher value work. The University of Manchester is defining a new model for modern higher education by pairing its decades of AI innovation with equitable access to cutting edge AI tools that prepare the next generation of citizens, innovators and creators.
Inspiring the maker in every one of us with ubiquitous innovation that amplifies creativity and accelerates impact
Adobe is redefining creativity, productivity and customer experience by infusing AI deeply into its product ecosystem, powered by Azure, Copilot and Microsoft Foundry. By supporting third-party models directly inside Adobe Firefly, creators can choose the best model for the job while unlocking new agentic capabilities across Photoshop, Acrobat and Adobe’s Customer Experience Orchestration solutions; resulting in significant acceleration in workflows through AI-driven agents. With daily use of GitHub Copilot, its engineering organization is boosting developer productivity and speed to innovation. The company is also focused on enterprise-grade governance and data provenance to help customers trust and verify content as AI adoption grows — further reinforced by Adobe Marketing Agent for Microsoft 365 Copilot as part of the Agent 365 preview. By combining open model choice with responsible AI infrastructure, Adobe is giving customers creative choice and operational confidence, while unlocking faster innovation, without compromising security, trust or brand integrity.
In an industry facing unprecedented pressure — from rising costs to shrinking margins — Land O’Lakes is accelerating AI innovation across American agriculture by developing a new digital assistant called Oz. Built on models within Microsoft Foundry, the digital assistant turns an 800-page crop protection guide into instant, data-rich insights delivered through intuitive workflows. The Copilot solution provides agronomic expertise throughout the growing season tailored to each grower’s soil, crop and environmental conditions — with personalized recommendations that address unique farm-level challenges. This AI-enhanced solution streamlines access to critical information so experts can help growers make faster, more confident decisions that help control input costs, boost crop yields and drive long-term success. The work Land O’Lakes is doing shows how AI can democratize intelligence — empowering farmers to grow their businesses while feeding their communities and building a more resilient agricultural ecosystem.
Mercedes Benz is transforming every layer of its enterprise — from headquarters to the factory floor to the driving experience. Built on Azure and powered by Copilot, the company is moving toward making Copilot available broadly, with over 50 business areas already using Copilot Studio to build and automate their own workflows and agents. GitHub Copilot has driven a 70% increase in engagement with software developers, shifting teams from routine coding to higher value innovation. On the factory floor, Mercedes’ MO360 data platform connects 30 passenger plants, and its Digital Factory Chat multiagent system is cutting issue diagnosis time from days to minutes. With the next generation of Hey Mercedes — powered by Azure OpenAI, Bing, Microsoft Teams, Intune and Microsoft 365 Copilot — the vehicle becomes a “third workspace,” enabling productivity through natural voice. Investing in AI skills, tools and platform breadth is helping Mercedes Benz build enterprise capability and bend the curve on innovation; with efficiency gains that help teams innovate and drive operational impact internally and across customer experiences. We also recently announced our partnership with the Mercedes-AMG PETRONAS F1 Team to drive innovation across its racing operations. With Microsoft’s cloud and enterprise AI stack — including Azure AI, Microsoft 365 and GitHub — it is turning data into real-time intelligence that powers faster decisions, smarter strategies and sustained competitive advantage on and off the track.
Pantone is transforming decades of color expertise into a next-generation AI offering with the launch of its Pantone Palette Generator, built entirely on Microsoft Foundry and Azure AI. By applying a multi-agent architecture powered by Azure AI Search, Azure Cosmos DB and Azure OpenAI, Pantone is bringing instant, trend-backed color guidance directly into creative workflows. What once required weeks of research across physical color books and expert archives can now be achieved in seconds, enabling designers, brands and product teams to move from inspiration to production with greater speed and accuracy. Using GitHub Copilot, its engineering team accelerated development of initial proof of concept by more than 200 hours, allowing the company to focus on enhancing agent orchestration and color science logic. As Pantone expands its AI-native platform, it is also helping creators build new skills — learning how to integrate agentic workflows, prompt engineering and trend-driven insights into the design process. The platform modernizes Pantone’s iconic color system and positions the company to scale new digital services as it evolves its multiagent capabilities and reshapes business processes.
Westpac is bringing Copilot to more than 35,000 employees across its global workforce — the largest Microsoft 365 Copilot rollout to date in Australia and the largest deployment in financial services within Asia Pacific. This comes after a successful pilot with 15,000 employees that delivered strong business outcomes and freed up significant time for users each month. The company is now deploying AI to accelerate work, reduce friction and reinvent how employees engage with their customers. The bank is pairing its Copilot implementation with AI education programs and Microsoft Copilot Studio to build custom agents for HR and IT, while creating a new Azure-based innovation sandbox to enable teams to quickly experiment with AI-enabled workflows and solutions. Westpac’s move to embed AI at scale is a strategic investment in people and a catalyst for more efficient, higher value work—underscoring how responsible, enterprise grade AI can drive meaningful value for employees, customers and shareholders.
Bringing observability to every layer of the stack to ensure outcomes are reliable, safe and aligned with the business
ServiceNow is helping its customers accelerate AI adoption safely by integrating with Microsoft Agent 365 — Microsoft’s control plane for securing and governing agents at scale. By enabling them to bring their agentic workflows into a unified governance environment, ServiceNow can help them gain visibility, access controls and ensure compliance across AI systems. Companies like AstraZeneca are already using ServiceNow AI Control Tower together with Agent 365 to manage lab and operational workflows, saving 90,000 hours that researchers can redirect toward discovering lifesaving drugs. The ability to see, trust and scale AI agents gives organizations confidence to move quickly without losing control. ServiceNow is demonstrating how advanced workflow AI delivers its greatest value when paired with enterprise-wide governance — giving organizations the speed and efficiency they want while maintaining the control required for mission critical operations.
To help organizations safely accelerate agentic AI adoption, Workday is building solutions that work with Agent 365 for a unified way to govern its agents. The company is helping businesses address the shift from shadow IT to shadow AI as employees begin incorporating AI agents into the way they work. Workday’s Agent System of Record helps customers establish governance and oversight around the work AI agents are doing, allowing them to scale intelligent workflows with confidence. Workday highlights that responsible AI acceleration requires combining powerful automation with a shared control plan — making the secure, compliant path the easiest one, and enabling organizations to scale without losing oversight.
As organizations move to operationalize agentic AI at scale, Genspark is integrating with Agent 365 to provide a governed, enterprise grade path for deploying its rapidly growing ecosystem of Super Agents. As employees increasingly experiment with personal AI creation tools, companies are looking for ways to shift from unmanaged shadow AI to secure, outcome-driven agent workflows. Through Agent 365, the platform enables organizations to register its agents alongside those from Microsoft and other partners, apply unified governance policies, maintain consistent identity and permission controls, and ensure all agent generated outputs align with corporate, regulatory and data residency requirements. This governance layer extends the value of its own agent registry — where more than 80 specialized agents and millions of user generated prompts are already driving productivity — allowing customers to safely scale agentic creation across roles, teams and industries. Genspark demonstrates that responsible acceleration requires pairing powerful, outcome first agent experiences with a shared control plane like Agent 365, enabling AI to scale without losing oversight.
At Ignite, we also introduced Agent Factory — a new way for organizations to build and scale AI agents with confidence — bringing together Work IQ, Fabric IQ and Foundry IQ under a single, ROI-driven model. Agent Factory enables companies to take complex workflows — from claims processing to freight forwarding to supply chain management — and turn them into measurable, production-ready agentic systems supported by Microsoft’s forward deployed engineers, partner ecosystem and built-in governance. As agents move from experimentation to mission critical automation, companies need a standardized, governed path to build and scale them, and Agent Factory is the solution — tying innovation directly to measurable ROI.
Our ambition with Frontier Transformation is to ensure that the maker in every one of us is empowered by everything we build and deliver. As we enter the second half of the fiscal year, one thing is clear: our customers and partners are redefining what can be achieved as Frontier Firms. Built on a foundation of Intelligence + Trust, and with the full breadth of Microsoft’s cloud and AI solutions, we are committed to helping every organization scale AI-first innovation. Our model diverse, open and heterogenous platform unifies your IQ assets — and the human ambition that lives inside of your company — to deliver outcomes that help you achieve your highest aspirations. Thank you for your continued partnership and trust as we continue shaping what is possible together.
The post How Microsoft is empowering Frontier Transformation with Intelligence + Trust appeared first on The Official Microsoft Blog.
At Microsoft Ignite in November, we introduced Frontier Transformation — a holistic reimagining of business aligning AI with human ambition to help organizations achieve their highest aspirations and growth potential. While AI Transformation centered on efficiency and productivity, Frontier Transformation challenges us to do more for humanity by democratizing intelligence to unlock creativity and innovation for…
The post How Microsoft is empowering Frontier Transformation with Intelligence + Trust appeared first on The Official Microsoft Blog.Read More
Maia 200: The AI accelerator built for inference
Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the economics of AI token generation. Maia 200 is an AI inference powerhouse: an accelerator built on TSMC’s 3nm process with native FP8/FP4 tensor cores, a redesigned memory system with 216GB HBM3e at 7 TB/s and 272MB of on-chip SRAM, plus data movement engines that keep massive models fed, fast and highly utilized. This makes Maia 200 the most performant, first-party silicon from any hyperscaler, with three times the FP4 performance of the third generation Amazon Trainium, and FP8 performance above Google’s seventh generation TPU. Maia 200 is also the most efficient inference system Microsoft has ever deployed, with 30% better performance per dollar than the latest generation hardware in our fleet today.
Maia 200 is part of our heterogenous AI infrastructure and will serve multiple models, including the latest GPT-5.2 models from OpenAI, bringing performance per dollar advantage to Microsoft Foundry and Microsoft 365 Copilot. The Microsoft Superintelligence team will use Maia 200 for synthetic data generation and reinforcement learning to improve next-generation in-house models. For synthetic data pipeline use cases, Maia 200’s unique design helps accelerate the rate at which high-quality, domain-specific data can be generated and filtered, feeding downstream training with fresher, more targeted signals.
Maia 200 is deployed in our US Central datacenter region near Des Moines, Iowa, with the US West 3 datacenter region near Phoenix, Arizona, coming next and future regions to follow. Maia 200 integrates seamlessly with Azure, and we are previewing the Maia SDK with a complete set of tools to build and optimize models for Maia 200. It includes a full set of capabilities, including PyTorch integration, a Triton compiler and optimized kernel library, and access to Maia’s low-level programming language. This gives developers fine-grained control when needed while enabling easy model porting across heterogeneous hardware accelerators.
Engineered for AI inference
Fabricated on TSMC’s cutting-edge 3-nanometer process, each Maia 200 chip contains over 140 billion transistors and is tailored for large-scale AI workloads while also delivering efficient performance per dollar. On both fronts, Maia 200 is built to excel. It is designed for the latest models using low-precision compute, with each Maia 200 chip delivering over 10 petaFLOPS in 4-bit precision (FP4) and over 5 petaFLOPS of 8-bit (FP8) performance, all within a 750W SoC TDP envelope. In practical terms, Maia 200 can effortlessly run today’s largest models, with plenty of headroom for even bigger models in the future.
Crucially, FLOPS aren’t the only ingredient for faster AI. Feeding data is equally important. Maia 200 attacks this bottleneck with a redesigned memory subsystem. The Maia 200 memory subsystem is centered on narrow-precision datatypes, a specialized DMA engine, on-die SRAM and a specialized NoC fabric for high‑bandwidth data movement, increasing token throughput.
Optimized AI systems
At the systems level, Maia 200 introduces a novel, two-tier scale-up network design built on standard Ethernet. A custom transport layer and tightly integrated NIC unlocks performance, strong reliability and significant cost advantages without relying on proprietary fabrics.
Each accelerator exposes:
- 2.8 TB/s of bidirectional, dedicated scaleup bandwidth
- Predictable, high-performance collective operations across clusters of up to 6,144 accelerators
This architecture delivers scalable performance for dense inference clusters while reducing power usage and overall TCO across Azure’s global fleet.
Within each tray, four Maia accelerators are fully connected with direct, non‑switched links, keeping high‑bandwidth communication local for optimal inference efficiency. The same communication protocols are used for intra-rack and inter-rack networking using the Maia AI transport protocol, enabling seamless scaling across nodes, racks and clusters of accelerators with minimal network hops. This unified fabric simplifies programming, improves workload flexibility and reduces stranded capacity while maintaining consistent performance and cost efficiency at cloud scale.
A cloud-native development approach
A core principle of Microsoft’s silicon development programs is to validate as much of the end-to-end system as possible ahead of final silicon availability.
A sophisticated pre-silicon environment guided the Maia 200 architecture from its earliest stages, modeling the computation and communication patterns of LLMs with high fidelity. This early co-development environment enabled us to optimize silicon, networking and system software as a unified whole, long before first silicon.
We also designed Maia 200 for fast, seamless availability in the datacenter from the beginning, building out early validation of some of the most complex system elements, including the backend network and our second-generation, closed loop, liquid cooling Heat Exchanger Unit. Native integration with the Azure control plane delivers security, telemetry, diagnostics and management capabilities at both the chip and rack levels, maximizing reliability and uptime for production-critical AI workloads.
As a result of these investments, AI models were running on Maia 200 silicon within days of first packaged part arrival. Time from first silicon to first datacenter rack deployment was reduced to less than half that of comparable AI infrastructure programs. And this end-to-end approach, from chip to software to datacenter, translates directly into higher utilization, faster time to production and sustained improvements in performance per dollar and per watt at cloud scale.
Sign up for the Maia SDK preview
The era of large-scale AI is just beginning, and infrastructure will define what’s possible. Our Maia AI accelerator program is designed to be multi-generational. As we deploy Maia 200 across our global infrastructure, we are already designing for future generations and expect each generation will continually set new benchmarks for what’s possible and deliver ever better performance and efficiency for the most important AI workloads.
Today, we’re inviting developers, AI startups and academics to begin exploring early model and workload optimization with the new Maia 200 software development kit (SDK). The SDK includes a Triton Compiler, support for PyTorch, low-level programming in NPL and a Maia simulator and cost calculator to optimize for efficiencies earlier in the code lifecycle. Sign up for the preview here.
Get more photos, video and resources on our Maia 200 site and read more details.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
The post Maia 200: The AI accelerator built for inference appeared first on The Official Microsoft Blog.
Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the economics of AI token generation. Maia 200 is an AI inference powerhouse: an accelerator built on TSMC’s 3nm process with native FP8/FP4 tensor cores, a redesigned memory system with 216GB HBM3e at 7 TB/s and 272MB of on-chip SRAM, plus…
The post Maia 200: The AI accelerator built for inference appeared first on The Official Microsoft Blog.
Read More
Announcing Open to Work: How to Get Ahead in the Age of AI
The work we do, and the way we do it, is always changing. Each of us has a memory of how we once did a task regularly, the tools we used and how both the task and the tools have since changed so much they are nearly unrecognizable. Because we are living and working in the “now,” change feels both personal and fast, so it is always worth remembering that this has happened before, maybe not in just this way or with this speed.
And it is true. AI is rewriting work. How we do our jobs. How roles change. How careers are built. The skills we need. Some of that is exciting. Some of it can feel overwhelming. What we remember and have learned from previous times is that in moments like this, people are open to work and don’t just need new tools. They need a new mindset, a clearer understanding of what’s changing and a path forward.
That’s why today we’re announcing Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, by CEO Ryan Roslansky and Chief Economic Opportunity Officer Aneesh Raman. The book explores how AI is reshaping work and what that shift means for the people navigating it every day.
Microsoft and LinkedIn sit at the intersection of how work is done and how careers are built. We share a belief that the future of work will be driven by human creativity and ingenuity, not technology alone. When humans stay at the center, AI amplifies what people do best and creates new economic opportunity. Open to Work is grounded in that belief and focused on what’s happening now, not abstract predictions about the future.
Ryan’s leadership at LinkedIn and as head of engineering for Microsoft 365 Copilot gives him a rare perspective on this moment. He sees how AI is built, how it shows up in everyday work and what it takes to adapt. Aneesh’s role gives him unique insight into how together we can use this moment of change to create economic opportunity for every member of the global workforce.
The book is backed by real data — insights from experts, LinkedIn’s global network, Microsoft customers and the Work Trend Index. The goal isn’t hype. It’s clarity about how work is changing and how people can respond in practical, meaningful ways.
For professionals, Open to Work is about agency — what you delegate to AI, what skills to deepen and how you stay relevant as roles evolve. For leaders, it’s about rethinking how work gets organized and cultivating a Frontier mindset: the conviction that the most important innovations happen at the edges, where uncertainty is highest and the opportunity to shape what comes next is greatest. And for Microsoft and LinkedIn employees, it’s a reminder of the responsibility we share to shape the future of work in a thoughtful, human-centered way.
Open to Work publishes March 31 and is available for pre-order today.
Frank X. Shaw is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management and military affairs.
Top image: Aneesh Raman, left, LinkedIn chief economic opportunity officer, and Ryan Roslansky, LinkedIn CEO. Photo provided by LinkedIn.
The post Announcing Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.
The work we do, and the way we do it, is always changing. Each of us has a memory of how we once did a task regularly, the tools we used and how both the task and the tools have since changed so much they are nearly unrecognizable. Because we are living and working in…
The post Announcing Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.Read More
Microsoft announces acquisition of Osmos to accelerate autonomous data engineering in Fabric
Today, Microsoft is announcing the acquisition of Osmos, an agentic AI data engineering platform designed to help simplify complex and time-consuming data workflows.
Microsoft + Osmos: Extending Microsoft Fabric with agentic AI for data engineering
Organizations today face a common challenge: data is everywhere, but making it actionable is often manual, slow and expensive. Many teams spend most of their time preparing data instead of analyzing it. Osmos solves this problem by applying agentic AI to turn raw data into analytics and AI-ready assets in OneLake, the unified data lake at the core of Microsoft Fabric.
This acquisition builds on Microsoft Fabric’s goal to enable customers to unify all data and analytics into a single, secure platform. With the acquisition of Osmos, we are taking the next step toward a future where autonomous AI agents work alongside people — helping reduce operational overhead and making it easier for customers to connect, prepare, analyze and share data across the organization.
Looking ahead: Empowering customers to unlock value from data
Today’s announcement reinforces Microsoft’s focus to help every organization unlock more value from their data faster and with greater simplicity. The Osmos team will join Microsoft’s Fabric engineering organization to advance our vision for simpler, more intuitive and AI-ready data experiences.
Stay tuned for updates as we integrate Osmos into Fabric and continue our journey to empower every organization to achieve more with data. To follow updates, visit the Microsoft Fabric Blog.
Bogdan Crivat leads Microsoft’s Azure Data Analytics, building the Fabric engines for big data behind Power BI, and our AI-powered analytics infrastructure.
The post Microsoft announces acquisition of Osmos to accelerate autonomous data engineering in Fabric appeared first on The Official Microsoft Blog.
Today, Microsoft is announcing the acquisition of Osmos, an agentic AI data engineering platform designed to help simplify complex and time-consuming data workflows. Microsoft + Osmos: Extending Microsoft Fabric with agentic AI for data engineering Organizations today face a common challenge: data is everywhere, but making it actionable is often manual, slow and expensive. Many…
The post Microsoft announces acquisition of Osmos to accelerate autonomous data engineering in Fabric appeared first on The Official Microsoft Blog.Read More
From idea to deployment: The complete lifecycle of AI on display at Ignite 2025
By now, most people would agree that AI is in the process of fundamentally changing how we work and solve problems. But this technology is still too often thought of as an addition to the work we do, rather than a fundamental part of it.
AI is not something that you can just plop on the end of a finished product, like a cherry on top of a sundae. Instead, using AI responsibly and wisely means thinking through how it can be used most effectively at every layer, from the datacenter that powers AI functionality to the people and organizations that are benefiting from its capabilities.
As we embark on another Microsoft Ignite, our company is empowering the complete lifecycle of AI, creating tools and solutions to drive the next generation of digital transformation for every organization and at every level of the work they do.
We envision a future where organizations become Frontier Firms by using AI for unlocking creativity and innovation, allowing the next great ideas to surface.
These are some of the major themes we are seeing with this year’s Ignite products and features:
AI in the flow of human ambition
At Microsoft, we believe that all great ideas start with human ambition, which can be accessed and unlocked using the capabilities in Microsoft 365 Copilot and an agent ecosystem.
Work IQ amplifies your IQ. It’s the intelligence layer that enables Microsoft 365 Copilot and agents to know how you work, with whom you work and the content you collaborate on. Built on your data, memory and inference, it connects to the rich company knowledge in your emails, files, meetings and chats, plus your preferences, habits, work patterns and relationships. It allows Copilot to make connections, unlock insights and predict the next best action based on native integrations, not a patchwork of third-party connectors. And now, you can tap into the expertise of Work IQ with APIs to build agents tuned to your unique workflows and business needs.
Work IQ also is powering many of the updates across Microsoft 365 Copilot announced at Ignite today.
Ubiquitous innovation and intelligence
In a Frontier Firm, there are makers in every room of the house. People on the frontlines are closest to the work problems that need to be solved. They can create agents to help them in their day-to-day work.
How do AI agents know what to do with your data? Foundry IQ and Fabric IQ help AI agents understand what users are doing, bridge the gap between raw data and real-world business meaning and find the context to make decisions.
Fabric IQ brings together analytical, time series and location-based data with your operational systems under one shared model tied to business meaning. This gives you a live, connected view of your business, so both people and AI can act in real time. If you are a customer who is already using Power BI for your business intelligence reporting, all of that pre-existing data modeling work will act as an immediate accelerant, giving your agents the unique context that defines how your business runs.
Foundry IQ takes this further with a fully managed knowledge system designed to ground AI agents over multiple data sources — including Microsoft 365 (Work IQ), Fabric IQ, custom applications and the web. This single endpoint for knowledge has routing and intelligence built in, enabling higher-quality reasoning, safer actions and more value for builders.
Microsoft Agent Factory is a program that brings these agent IQ layers together to help organizations build agents with confidence. With a single metered plan, customers can start building with IQ using Microsoft Foundry and Copilot Studio. They can deploy their agents anywhere, including Microsoft 365 Copilot, with no upfront licensing and provisioning required. Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers and access tailored role-based training to boost AI fluency across teams.
Observability at every layer
By 2028, businesses are projected to have[1] 1.3 billion AI agents automating workflows. Most organizations don’t yet have a way to observe, secure or govern them — if not governed, AI agents are the new shadow IT.
Microsoft Agent 365 enables you to observe, manage and secure your AI agents, whether the agents are created with Microsoft platforms, open-source frameworks or third-party platforms.
It equips them with many of the same apps and protections as people, tailored to agent needs, saving IT time and effort on integrating agents into business processes. It includes the Microsoft security solutions Defender, Entra, Purview and Foundry Control Plane to protect and govern agents, productivity tools including Microsoft 365 apps and Work IQ to help people work more efficiently and Microsoft 365 admin center to manage agents.
This is only a small selection of the many exciting features and updates we will be announcing at Ignite. As a reminder, you can view keynote sessions from Microsoft executives, including Judson Althoff, Scott Guthrie, Charles Lamanna, Asha Sharma and Ryan Roslansky, live or on-demand.
Plus, you can get more on all these announcements by exploring the Book of News, the official compendium of all today’s news.
Frank X. Shaw is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management and military affairs.
Related:
Partners leading the AI transformation: Microsoft Ignite 2025 recap
[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825
The post From idea to deployment: The complete lifecycle of AI on display at Ignite 2025 appeared first on The Official Microsoft Blog.
By now, most people would agree that AI is in the process of fundamentally changing how we work and solve problems. But this technology is still too often thought of as an addition to the work we do, rather than a fundamental part of it. AI is not something that you can just plop on…
The post From idea to deployment: The complete lifecycle of AI on display at Ignite 2025 appeared first on The Official Microsoft Blog.Read More
Microsoft, NVIDIA and Anthropic announce strategic partnerships
Anthropic to scale Claude on Azure
Anthropic to adopt NVIDIA architecture
NVIDIA and Microsoft to invest in Anthropic
Today Microsoft, NVIDIA and Anthropic announced new strategic partnerships. Anthropic is scaling its rapidly-growing Claude AI model on Microsoft Azure, powered by NVIDIA, which will broaden access to Claude and provide Azure enterprise customers with expanded model choice and new capabilities. Anthropic has committed to purchase $30 billion of Azure compute capacity and to contract additional compute capacity up to one gigawatt.
For the first time, NVIDIA and Anthropic are establishing a deep technology partnership to support Anthropic’s future growth. Anthropic and NVIDIA will collaborate on design and engineering, with the goal of optimizing Anthropic models for the best possible performance, efficiency, and TCO, and optimizing future NVIDIA architectures for Anthropic workloads. Anthropic’s compute commitment will initially be up to one gigawatt of compute capacity with NVIDIA Grace Blackwell and Vera Rubin systems.
Microsoft and Anthropic are also expanding their existing partnership to provide broader access to Claude for businesses. Customers of Microsoft Foundry will be able to access Anthropic’s frontier Claude models including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. This partnership will make Claude the only frontier model available on all three of the world’s most prominent cloud services. Azure customers will gain expanded choice in models and access to Claude-specific capabilities.
Microsoft has also committed to continuing access for Claude across Microsoft’s Copilot family, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio.
As part of the partnership, NVIDIA and Microsoft are committing to invest up to $10 billion and up to $5 billion respectively in Anthropic.
Anthropic co-founder and CEO Dario Amodei, Microsoft Chairman and CEO Satya Nadella, and NVIDIA founder and CEO Jensen Huang gathered to discuss the new partnerships:
The post Microsoft, NVIDIA and Anthropic announce strategic partnerships appeared first on The Official Microsoft Blog.
Anthropic to scale Claude on Azure Anthropic to adopt NVIDIA architecture NVIDIA and Microsoft to invest in Anthropic Today Microsoft, NVIDIA and Anthropic announced new strategic partnerships. Anthropic is scaling its rapidly-growing Claude AI model on Microsoft Azure, powered by NVIDIA, which will broaden access to Claude and provide Azure enterprise customers with expanded model choice and new capabilities. Anthropic…
The post Microsoft, NVIDIA and Anthropic announce strategic partnerships appeared first on The Official Microsoft Blog.Read More
Infinite scale: The architecture behind the Azure AI superfactory
Today, we are unveiling the next Fairwater site of Azure AI datacenters in Atlanta, Georgia. This purpose-built datacenter is connected to our first Fairwater site in Wisconsin, prior generations of AI supercomputers and the broader Azure global datacenter footprint to create the world’s first planet-scale AI superfactory. By packing computing power more densely than ever before, each Fairwater site is built to efficiently meet unprecedented demand for AI compute, push the frontiers of model intelligence and empower every person and organization on the planet to achieve more.
To meet this demand, we have reinvented how we design AI datacenters and the systems we run inside of them. Fairwater is a departure from the traditional cloud datacenter model and uses a single flat network that can integrate hundreds of thousands of the latest NVIDIA GB200 and GB300 GPUs into a massive supercomputer. These innovations are a product of decades of experience designing datacenters and networks, as well as learnings from supporting some of the largest AI training jobs on the planet.
While the Fairwater datacenter design is well suited for training the next generation of frontier models, it is also built with fungibility in mind. Training has evolved from a single monolithic job into a range of workloads with different requirements (such as pre-training, fine-tuning, reinforcement learning and synthetic data generation). Microsoft has deployed a dedicated AI WAN backbone to integrate each Fairwater site into a broader elastic system that enables dynamic allocation of diverse AI workloads and maximizes GPU utilization of the combined system.
Below, we walk through some of the exciting technical innovations that support Fairwater, from the way we build datacenters to the networking within and across the sites.
Maximum density of compute
Modern AI infrastructure is increasingly constrained by the laws of physics. The speed of light is now a key bottleneck in our ability to tightly integrate accelerators, compute and storage with performant latency. Fairwater is designed to maximize the density of compute to minimize latency within and across racks and maximize system performance.
One of the key levers for driving density is improving cooling at scale. AI servers in the Fairwater datacenters are connected to a facility-wide cooling system designed for longevity, with a closed-loop approach that reuses the liquid continuously after the initial fill with no evaporation. The water used in the initial fill is equivalent to what 20 homes consume in a year and is only replaced if water chemistry indicates it is needed (it is designed for 6-plus years), making it extremely efficient and sustainable.
Liquid-based cooling also provides much higher heat transfer, enabling us to maximize rack and row-level power (~140kW per rack, 1,360 kW per row) to pack compute as densely as possible inside the datacenter. State-of-the-art cooling also helps us maximize utilization of this dense compute in steady-state operations, enabling large training jobs to run performantly at high scale. After cycling through a system of cold plate paths across the GPU fleet, heat is dissipated by one of the largest chiller plants on the planet.

Another way we are driving compute density is with a two-story datacenter building design. Many AI workloads are very sensitive to latency, which means cable run lengths can meaningfully impact cluster performance. Every GPU in Fairwater is connected to every other GPU, so the two-story datacenter building approach allows for placement of racks in three dimensions to minimize cable lengths, which in turn improves latency, bandwidth, reliability and cost.

High-availability, low-cost power
We are pushing the envelope in serving this compute with cost-efficient, reliable power. The Atlanta site was selected with resilient utility power in mind and is capable of achieving 4×9 availability at 3×9 cost. By securing highly available grid power, we can also forgo traditional resiliency approaches for the GPU fleet (such as on-site generation, UPS systems and dual-corded distribution), driving cost savings for customers and faster time-to-market for Microsoft.
We have also worked with our industry partners to codevelop power-management solutions to mitigate power oscillations created by large scale jobs, a growing challenge in maintaining grid stability as AI demand scales. This includes a software-driven solution that introduces supplementary workloads during periods of reduced activity, a hardware-driven solution where the GPUs enforce their own power thresholds and an on-site energy storage solution to further mask power fluctuations without utilizing excess power.
Cutting-edge accelerators and networking systems
Fairwater’s world-class datacenter design is powered by purpose-built servers, cutting-edge AI accelerators and novel networking systems. Each Fairwater datacenter runs a single, coherent cluster of interconnected NVIDIA Blackwell GPUs, with an advanced network architecture that can scale reliably beyond traditional Clos network limits with current-gen switches (hundreds of thousands of GPUs on a single flat network). This required innovation across scale-up networking, scale-out networking and networking protocol.
In terms of scale-up, each rack of AI accelerators houses up to 72 NVIDIA Blackwell GPUs, connected via NVLink for ultra-low-latency communication within the rack. Blackwell accelerators provide the highest compute density available today, with support for low-precision number formats like FP4 to increase total FLOPS and enable efficient memory use. Each rack provides 1.8 TB of GPU-to-GPU bandwidth, with over 14 TB of pooled memory available to each GPU.

These racks then use scale-out networking to create pods and clusters that enable all GPUs to function as a single supercomputer with minimal hop counts. We achieve this with a two-tier, ethernet-based backend network that supports massive cluster sizes with 800 Gbps GPU-to-GPU connectivity. Relying on a broad ethernet ecosystem and SONiC (Software for Open Network in the Cloud – which is our own operating system for our network switches) also helps us avoid vendor lock-in and manage cost, as we can use commodity hardware instead of proprietary solutions.
Improvements across packet trimming, packet spray and high-frequency telemetry are core components of our optimized AI network. We are also working to enable deeper control and optimization of network routes. Together, these technologies deliver advanced congestion control, rapid detection and retransmission and agile load balancing, ensuring ultra-reliable, low-latency performance for modern AI workloads.
Planet scale
Even with these innovations, compute demands for large training jobs (now measured in trillions of parameters) are quickly outpacing the power and space constraints of a single facility. To serve these needs, we have built a dedicated AI WAN optical network to extend Fairwater’s scale-up and scale-out networks. Leveraging our scale and decades of hyperscale expertise, we delivered over 120,000 new fiber miles across the US last year — expanding AI network reach and reliability nationwide.
With this high-performance, high-resiliency backbone, we can directly connect different generations of supercomputers into an AI superfactory that exceeds the capabilities of a single site across geographically diverse locations. This empowers AI developers to tap our broader network of Azure AI datacenters, segmenting traffic based on their needs across scale-up and scale-out networks within a site, as well as across sites via the continent spanning AI WAN.
This is a meaningful departure from the past, where all traffic had to ride the scale-out network regardless of the requirements of the workload. Not only does it provide customers with fit-for-purpose networking at a more granular level, it also helps create fungibility to maximize the flexibility and utilization of our infrastructure.
Putting it all together
The new Fairwater site in Atlanta represents the next leap in the Azure AI infrastructure and reflects our experience running the largest AI training jobs on the planet. It combines breakthrough innovations in compute density, sustainability and networking systems to efficiently serve the massive demand for computational power we are seeing. It also integrates deeply with other AI datacenters and the broader Azure platform to form the world’s first AI superfactory. Together, these innovations provide a flexible, fit-for-purpose infrastructure that can serve the full spectrum of modern AI workloads and empower every person and organization on the planet to achieve more. For our customers, this means easier integration of AI into every workflow and the ability to create innovative AI solutions that were previously unattainable.
Find out more about how Microsoft Azure can help you integrate AI to streamline and strengthen development lifecycles here.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
The post Infinite scale: The architecture behind the Azure AI superfactory appeared first on The Official Microsoft Blog.
Today, we are unveiling the next Fairwater site of Azure AI datacenters in Atlanta, Georgia. This purpose-built datacenter is connected to our first Fairwater site in Wisconsin, prior generations of AI supercomputers and the broader Azure global datacenter footprint to create the world’s first planet-scale AI superfactory. By packing computing power more densely than ever…
The post Infinite scale: The architecture behind the Azure AI superfactory appeared first on The Official Microsoft Blog.
Read More
Bridging the AI divide: How Frontier firms are transforming business
Across every industry, leaders are asking: How can AI be used to fundamentally transform our business? At the forefront are Frontier firms — empowering human ambition and finding AI-first differentiation in everything to maximize their potential and impact on society. These firms are redefining what’s possible and setting the pace for the future.
To better understand this transformation, Microsoft commissioned a global study with the International Data Corporation (IDC) of more than 4,000 business leaders responsible for AI decisions. The findings reveal 68% of these companies are using AI today but the real difference lies in how they’re using it. Frontier firms, the ones leading in AI Transformation, report they are achieving returns that are three times higher than slow adopters.
What sets Frontier firms apart
Their success goes beyond efficiency and productivity at scale, driving growth, expansion and industry leadership in a new AI-powered economy. Based on the IDC study, Microsoft has identified five key lessons learned in becoming a Frontier firm and how organizations can transform their business with AI.
#1 EXPANDING AI IMPACT ACROSS EVERY BUSINESS FUNCTION
On average Frontier firms are using AI across seven business functions. Over 70% are using AI in customer service, marketing, IT, product development and cybersecurity. These functions benefit from AI’s ability to automate workflows, generate content and detect anomalies in real time. This broad adoption is translating into measurable business impact: Frontier firms report better outcomes at a rate that is 4X greater than slow adopters across brand differentiation (87%), cost efficiency (86%), top-line growth (88%) and customer experience (85%).
BlackRock is transforming its investment lifecycle with Microsoft AI integrated into its Aladdin platform. Embedded across 20 apps and used by tens of thousands of users, AI tools help client relationship managers save hours per client by generating personalized briefs and opportunity analyses, while portfolio managers access real-time analytics and research summaries through Aladdin Copilot. The result is faster insights, improved data quality and enhanced risk management; helping BlackRock and its clients gain an advantage while enhancing client service, compliance and portfolio management.
#2: UNLOCKING INDUSTRY-SPECIFIC VALUE
While many organizations start their AI journey with personal productivity gains like automating tasks and improving efficiency, Frontier firms are moving further, deploying AI for strategic, industry-specific applications. According to the study, 67% are monetizing industry-specific AI use cases to boost revenue.
Industries at the forefront of this transformation include financial services, healthcare and manufacturing. Each is finding powerful, practical ways to apply AI to its most complex challenges. In financial services, organizations are strengthening fraud detection, accelerating transaction reconciliation and elevating customer support. In healthcare, it is helping clinicians generate accurate documentation, assist in diagnostics and deliver more personalized care. In manufacturing, AI is driving predictive maintenance, optimizing production schedules and automating quality inspections.
Mercedes-Benz is scaling AI across its global production network to advance automotive innovation, stabilize supply chain volatility, simplify production complexity and meet sustainability demands. Its MO360 data platform connects more than 30 car plants worldwide to the Microsoft Cloud for real-time data access, global optimization and analytics. The Digital Factory Chatbot Ecosystem uses a multi-agent system to empower employees with collaborative insights. Paint Shop AI leverages machine learning simulations to diagnose efficiency declines and reduce energy consumption of the buildings and machines — including 20% energy savings in the Rastatt paint shop — and NVIDIA Omniverse on Azure powers digital twins for agile planning and continuous improvement.
#3: BUILDING CUSTOM AI SOLUTIONS FOR COMPETITIVE ADVANTAGE
Today, 58% of Frontier firms are using custom AI solutions. Custom AI solutions allow businesses to embed proprietary knowledge, tone and compliance into every interaction. They can be fine-tuned on proprietary data or industry-specific knowledge, enabling higher accuracy in predictions or content generation and better alignment with business goals and compliance needs.
Within the next 24 months, 77% of Frontier firms plan to use custom AI solutions. This reflects a growing trend that AI leaders are layering in deeper strategic integrations of AI across their business.
As customers seek to use AI more to shop and search for products, luxury lifestyle company Ralph Lauren developed a personal, frictionless, inspirational and accessible solution to blend fashion with cutting-edge AI. Working with Microsoft, Ralph Lauren developed Ask Ralph: an AI-powered conversational tool providing styling tips and outfit recommendations from across the Polo Ralph Lauren brand. Powered by Azure OpenAI, the AI tool uses a natural language search engine to adapt dynamically to specific language inputs and interpret user intent to improve accuracy. It supports complex queries with exploratory or nuanced information needs with contextual understanding; and can discern tone, satisfaction and intent to refine recommendations. The tool also picks up on cues like location-based insights or event-driven needs. With Ask Ralph, customers can now reimagine how they shop online by putting the brand’s unique and iconic take on style right into their own hands.
#4: AGENTIC AI: THE NEW DIFFERENTIATOR FOR BUSINESS LEADERS
Agentic AI — systems that can reason, plan and act with human guidance — is fast becoming the next defining capability of Frontier organizations. In the next two years, IDC estimates the number of companies using agentic AI will triple.
Leaders today face a familiar challenge — teams are operating at full capacity, yet the demand for innovation and impact continues to grow. That’s where AI agents come in. In finance, they can surface real-time insights, provide policy guidance, review deal documents and assist in sourcing suppliers. In sales, agents are becoming always-on teammates — building pipelines, unifying insights across CRM systems, meetings, emails and the web and helping sellers qualify leads and draft personalized outreach. In customer service, AI agents can manage cases, maintain knowledge accuracy and interpret customer intent.
Dow is using agents to automate the shipping invoice analysis process and streamline its global supply chain to unlock new efficiencies and value. Receiving more than 100,000 shipping invoices via PDF each year, Dow built an autonomous agent in Copilot Studio to scan for billing inaccuracies and surface them in a dashboard for employee review. Using Freight Agent — a second agent built in Copilot Studio — employees can investigate further by “dialoguing with the data” in natural language. The agents are helping employees solve the challenge of hidden losses autonomously within minutes rather than weeks or months. Dow expects to save millions of dollars on shipping costs through increased accuracy in logistic rates and billing within the first year.
#5: AI BUDGETS ARE GROWING AND SO IS THE TEAM BEHIND THEM
71% of respondents plan to increase their AI budgets, with funding coming from IT and non-IT sources. These investments are no longer confined to the IT department or the Chief Digital Officer’s office.
To truly unlock AI’s transformational potential, it requires everyone collaborating across functions to drive innovation, adoption and impact: 34% of respondents are adding net new investment, 24% are repurposing existing IT budgets and 13% are reallocating funds from non-IT areas such as operations, HR or marketing. This diversified funding strategy signals that AI is no longer viewed as a niche technology — it’s becoming a core enabler of enterprise-wide transformation.
“IDC is projecting that the global economic impact of AI is projected to reach $22.3 trillion by 2030 (3.7% of global GDP in 2030), estimating the return on AI investments requires both strong measurement capabilities and a robust business case — one that models both cost implications and the potential for responsible value creation,” said David Schubmehl, Vice President AI and Automation for IDC.
The AI imperative: Act now to lead the future
The opportunity to demand more from AI is now. Among organizations surveyed, 22% are Frontier firms, realizing measurable impact and moving with speed, while 39% risk falling behind. Many are navigating challenges around security, privacy, governance and cost, as well as ethical considerations, integration complexity and scaling from pilot to production.
The message is clear: those who embrace AI benefit from momentum in efficiency, customer experience and innovation. To stay competitive, leaders should act now and embrace AI not as an experiment but as a strategic imperative for growth.
Closing the gap: Start your transformation today
Success starts with investment, governance and organizational readiness. Having a robust infrastructure that is secure, reliable and scalable to support AI initiatives is critical. The emergence of Frontier firms shows that customized AI deployment and responsible oversight can drive ROI and innovation.
Explore how Microsoft’s AI solutions can transform your organization. Leverage our resources to innovate with AI and start your journey to becoming a Frontier firm.
Alysa Taylor is the Chief Marketing Officer for Commercial Cloud and AI at Microsoft, leading teams that enable digital and AI transformation for organizations of all sizes across the globe. She is at the forefront of helping organizations around the world harness digital and AI innovation to transform how they operate and grow.
NOTE
IDC InfoBrief: sponsored by Microsoft, What Every Company Can Learn From Frontier firms Leading the AI Revolution, IDC # US53838325, November 2025
The post Bridging the AI divide: How Frontier firms are transforming business appeared first on The Official Microsoft Blog.
Across every industry, leaders are asking: How can AI be used to fundamentally transform our business? At the forefront are Frontier firms — empowering human ambition and finding AI-first differentiation in everything to maximize their potential and impact on society. These firms are redefining what’s possible and setting the pace for the future. To better…
The post Bridging the AI divide: How Frontier firms are transforming business appeared first on The Official Microsoft Blog.
Read More
Beware of double agents: How AI can fortify — or fracture — your cybersecurity
AI is rapidly becoming the backbone of our world, promising unprecedented productivity and innovation. But as organizations deploy AI agents to unlock new opportunities and drive growth, they also face a new breed of cybersecurity threats.
There are a lot of Star Trek fans here at Microsoft, including me. One of our engineering leaders gifted me a life-size cardboard standee of Data that lurks next to my office door. So, as I look at that cutout, I think about the Great AI Security Dilemma: Is AI going to be our best friend or our worst nightmare? Drawing inspiration from the duality of the android officer Data, and his evil twin Lore in the Star Trek universe, today’s AI agents can either fortify your cybersecurity defenses — or, if mismanaged — fracture them.
The influx of agents is real. IDC research[1] predicts there will be 1.3 billion agents in circulation by 2028. When we think about our agentic future in AI, the duality of Data and Lore seems like a great way to think about what we’ll face with AI agents and how to avoid double agents that upend control and trust. Leaders should consider three principles and tailor them to fit the specific needs of their organizations.
1. Recognize the new attack landscape
Security is not just an IT issue — it’s a board-level priority. Unlike traditional software, AI agents are even more dynamic, adaptive and likely to operate autonomously. This creates unique risks.
We must accept that AI can be abused in ways beyond what we’ve experienced with traditional software. We employ AI agents to perform well-meaning tasks, but those with broad privileges can be manipulated by bad actors to misuse their access, such as leaking sensitive data via automated actions. We call this the “Confused Deputy” problem. AI Agents “think” in terms of natural language where instructions and data are tightly intertwined, much more than in typical software we interact with. The generative models agents depend on dynamically analyze the entire soup of human (or even non-human) languages, making it hard to distinguish well-known safe operations from new instructions introduced through malicious manipulation. The risk grows even more when shadow agents — unapproved or orphaned — enter the picture. And as we saw in Bring Your Own Device (BYOD) and other tech waves, anything you cannot inventory and account for magnifies blind spots and drives risk ever upward.
2. Practice Agentic Zero Trust
AI agents may be new as productivity drivers, but they can still be managed effectively using established security principles. I’ve had great conversations about this here at Microsoft with leaders like Mustafa Suleyman, cofounder of DeepMind and now Executive Vice President and CEO of Microsoft AI. Mustafa frequently shares a way to think about this, which he outlined in his book The Coming Wave, in terms of Containment and Alignment.
Containment simply means we do not blindly trust our AI Agents, and we significantly box every aspect of what they do. For example, we cannot let any agent’s access privileges exceed its role and purpose — it’s the same security approach we take to employee accounts, software and devices, what we refer to as “least privilege.” Similarly, we contain by never implicitly trusting what an agent does or how it communicates — everything must be monitored — and when this isn’t possible, agents simply are not permitted to operate in our environment.
Alignment is all about infusing positive control of an AI agent’s intended purpose, through its prompts and the models it uses. We must only use AI agents trained to resist attempts at corruption, with standard and mission-specific safety protections built into both the model itself and the prompts used to invoke the model. AI agents must resist attempts to divert them from their approved uses. They must execute in a Containment environment that watches closely for deviation from their intended purpose. All this requires strong AI agent identity and clear accountable ownership within the organization. As part of AI governance, every agent must have an identity, and we must know who in the organization is accountable for its aligned behavior.
Containment (least privilege) and Alignment will sound familiar to enterprise security teams, because they align with some of the basic principles of Zero Trust. Agentic Zero Trust includes “assuming breach,” or never implicitly trusting anything, making humans, devices and agents verify who they are explicitly before they gain access and limiting their access to only what’s needed to perform a task. While Agentic Zero Trust ultimately includes deeper security capabilities, discussing Containment and Alignment is a good shorthand in security-in-AI strategy conversations with senior stakeholders to keep everyone grounded in managing the new risk. Agents will keep joining and adapting at work — some may become double agents. With proper controls, we can protect ourselves.
3. Foster a culture of secure innovation
Technology alone won’t solve AI security. Culture is the real superpower in managing cyber risk — and leaders have the unique ability to shape it. Start with open dialogue: make AI risks and responsible use part of everyday conversations. Keep it cross-functional: legal, compliance, HR and others should have a seat at the table. Invest in continuous education: train teams on AI security fundamentals and clarify policies to cut through noise. Finally, embrace safe experimentation: give people approved spaces to learn and innovate without creating risk.
Organizations that thrive will treat AI as a teammate, not a threat — building trust through communication, learning and continuous improvement.
The path forward: What every company should do
AI isn’t just another chapter — it’s a plot twist that changes everything. The opportunities are huge, but so are the risks. The rise of AI requires ambient security, which executives create by making cybersecurity a daily priority. This means blending robust technical measures with ongoing education and clear leadership so that security awareness influences every choice made. Organizations maintain ambient security when they:
- Make AI security a strategic priority.
- Insist on Containment and Alignment for every agent.
- Mandate identity, ownership and data governance.
- Build a culture that champions secure innovation.
And it will be important to take a set of practical steps:
- Assign every AI agent an ID and owner — just like employees need badges. This ensures traceability and control.
- Document each agent’s intent and scope.
- Monitor actions, inputs and outputs. Map data flows early to set compliance benchmarks.
- Keep agents in secure, sanctioned environments — no rogue “agent factories.”
The call to action for every business is: Review your AI governance framework now. Demand clarity, accountability and continuous improvement. The future of cybersecurity is human plus machine — lead with purpose and make AI your strongest ally.
At Microsoft, we know we have a huge role to play in empowering our customers in this new era. In May, we introduced Microsoft Entra Agent ID as a way to help customers place unique identities to agents from the moment they are created in Microsoft Copilot Studio and Azure AI Foundry. We leverage AI in Defender and Security Copilot, combined with the massive security signals we collect, to expose and defeat phishing campaigns and other attacks that cybercriminals may use as entry points to compromise AI agents. We’ve also been committed to a platform approach with AI agents, to help customers safely use both Microsoft and third-party agents on their journey, avoiding complexity and risk that come from needing to juggle excessive dashboards and management consoles.
I’m excited by several other innovations we will be sharing at Microsoft Ignite later this month, alongside customers and partners.
We may not be conversing with Data on the bridge of the USS Enterprise quite yet, but as a technologist, it’s never been more exciting than watching this stage of AI’s trajectory in our workplaces and lives. As leaders, understanding the core opportunities and risks helps create a safer world for humans and agents working together.
Notes
[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825
The post Beware of double agents: How AI can fortify — or fracture — your cybersecurity appeared first on The Official Microsoft Blog.
AI is rapidly becoming the backbone of our world, promising unprecedented productivity and innovation. But as organizations deploy AI agents to unlock new opportunities and drive growth, they also face a new breed of cybersecurity threats. There are a lot of Star Trek fans here at Microsoft, including me. One of our engineering leaders gifted…
The post Beware of double agents: How AI can fortify — or fracture — your cybersecurity appeared first on The Official Microsoft Blog.Read More
Becoming Frontier: How human ambition and AI-first differentiation are helping Microsoft customers go further with AI
Over the past few years, we have driven remarkable progress accelerating AI innovation together with our customers and partners. We are achieving efficiency and productivity at scale to shape industries and markets around the world. It is time to demand more of AI to solve humanity’s biggest challenges by democratizing intelligence, obsolescing the mundane and unlocking creativity. This is the notion of becoming Frontier: to empower human ambition and find AI-first differentiation in everything we do to maximize an organization’s potential and our impact on society.
Microsoft’s technology portfolio ensures our customers can go further with AI on their way to becoming Frontier firms, using our AI Transformation success framework as their guide. Our AI business solutions are dramatically changing how people gain actionable insights from data — fusing the capabilities of AI agents and Copilots while keeping humans at the center. We have the largest, most scalable, most capable cloud and AI platform in the industry for our customers to build upon their aspirations. We remain deeply focused on ensuring AI is used responsibly and securely, and embed security into everything we do to help our customers prioritize cybersecurity and guard against threats.
We are fortunate to work with thousands of customers and partners around the world — across every geography and industry. I am pleased to share some of the customer stories being showcased at our recently opened Experience Center One facility — each exemplifying the path to becoming Frontier.
Driven by a commitment to innovation, sustainability and operational excellence, ADNOC is helping meet the world’s growing energy demands safely and reliably, while accelerating decarbonization efforts. To empower its workforce, the company introduced OneTalent — a unified AI-powered platform consolidating over 16 legacy HR processes into a single, intelligent system that furthers its dedication to nurturing talent, aligning people with strategic goals and turning every member of its workforce into an AI collaborator. Partnering with Microsoft and AIQ, ADNOC applied AI across its operations to reimagine everything from seismic analysis to predictive maintenance. ENERGYai and Neuron 5 — AI-powered platforms built natively on Azure OpenAI — turn complexity into actionable insights. The platforms use predictive models to reduce downtime — by as much as 50% at one plant. They are also using autonomous agents to optimize energy use; unlocking data-driven insights that have accelerated energy workflows from months or years to just days or minutes.
Asset manager and technology provider BlackRock has been on a journey to infuse AI to level up how its organization operates across three key pillars: how they invest, how they operate and how they serve clients. To accelerate this mission, they partnered with Microsoft to transform processes across the investment management lifecycle by integrating cloud and AI technologies alongside its Aladdin platform. Embedded across 20 applications and accessed by tens of thousands of users, the Aladdin platform’s AI capabilities deliver functionally relevant tools to help redefine workflows for different types of financial service professionals. Client relationship managers are saving hours per client, reducing duplication and improving accuracy by evaluating CRM and market data to generate personalized client briefs and opportunity analyses using natural language processing — supported by verification and review methods that facilitate accuracy and compliance. Investment compliance officers are streamlining portfolio onboarding and compliance guideline coding, saving time on more straightforward tasks to focus on complex, investigative tasks. Portfolio managers can access data, analytics, research summaries, cash balances and more through AI-powered chat capabilities; enabling faster, more informed decision-making aligned with client mandates. With accelerated insights, improved data quality and enhanced risk management, BlackRock and its clients gain an advantage while enhancing client service, compliance and portfolio management.
To build on its culture of innovation and enable hyper-relevant messaging at scale, multinational advertising and media agency dentsu built a cutting-edge solution using Azure OpenAI: dentsu.Connect — a unified OS for its applications. By leveraging the power of AI across the entire campaign lifecycle, clients can build and execute campaigns while predicting marketers’ next best impact with confidence and precision. This end-to-end platform drives data connectivity and ensures seamless interoperability with clients’ technology and data stacks to maximize and drive brand relevance across content, production and media activation while aligning every action with business goals. dentsu.Connect helps minimize the gap between insights and action with speed and precision. Since launching, users have increased operational efficiency by 25%, improved business outcomes by 30% and quickened decision-making and data-driven AI insight generation by 125X.
Water management solutions and services partner Ecolab is harnessing the power of data-driven solutions to enable organizations to reduce water consumption, maximize system performance and optimize operating costs. Using Microsoft Azure and IoT services, the company built ECOLAB3D: an intelligent cloud platform that unifies diverse and dispersed IoT data to visualize and optimize water systems remotely. By providing actionable insights for real-time optimization across multiple assets and sites, Ecolab partners with global leaders such as Microsoft to collectively drive hundreds of millions in operational savings — while conserving more than 226 billion gallons of water annually; equivalent to the drinking water needs of nearly 800 million people. Delivering solutions across diverse industries, Ecolab is also a trusted partner for foodservice locations, helping balance labor costs with customer satisfaction. Its cloud-based platform Ecolab RushReady transforms data into an AI-enabled dashboard that improves daily operations by delivering actionable insights. In an Ecolab customer case study, this helped improve speed of service and sales labor per hour, resulting in increased profit of more than 10%. From data centers to dining rooms, Ecolab delivers intelligent, scalable solutions that transform operations for greater efficiency and measurable impact.
Leveraging Microsoft’s AI solutions across its portfolio, Epic built agentic “personas” to support care teams and patients, improve operations and financial performance and advance the practice of medicine. By summarizing patient records and automatically drafting clinical notes, one organization found that “Art” decreased after-hours documentation for clinicians by 60%, reduced burnout by 82% and helped them focus more on patient care. Care teams can also track long-term patient health and better plan treatment for chronic conditions, while nurses can perform wound image analysis automatically with 72% greater precision than manual methods. At one hospital, AI review of routine chest X-rays led to earlier discovery of over 100 cases of lung cancer, increasing the detection rate to 70% compared to the 27% national average. To support back-end operations, organizations are using “Penny” to improve the revenue cycle — resulting in $3.4 million in additional revenue at one regional network services provider. Epic also developed “Emmie” to have conversational interactions with patients and more easily help them schedule appointments and ask questions. Epic is leveraging Azure Fabric for the Cosmos platform to bring together anonymized data from more than 300 million patients, including 13 million with rare diseases, so physicians can connect with peers who have treated similar cases to improve rare disease diagnosis and select the most effective treatment.
To reduce professional burnout and accelerate scale across the industry, Harvey built an AI platform to automate legal research, contract reviews and document analysis. Harvey Assistant assists attorney searches across large document sets to identify specific clauses or provisions within seconds instead of hours. To support large-scale analysis, Harvey Vault manages and analyzes up to 100,000 files per project for complex tasks like litigation, while Harvey Workflows automates routine yet critical tasks into smaller AI-managed steps. With the integration of the newly expanded Microsoft Word add-in, AI capabilities provide legal teams with the ability to edit 100-plus page documents with a single query, enabling centrally controlled document compliance reviews that enhance efficiency while reducing risk. With more than 74,000 legal professionals using the platform, Harvey is helping them streamline workflows, reduce administrative burden and combat attorney fatigue — with the average user saving up to 25 hours of time per month.
To revolutionize drug discovery, biotech company Insilico Medicine is leveraging AI across its entire development pipeline — from target identification to molecule design and clinical trials. The company created Pharma.AI to accelerate research while reducing costs and improving success rates in emerging novel therapies — with developmental candidate timelines reduced from 2.5-4.5 years to 9-18 months for more than 20 therapeutic programs. The integrated AI platforms built with Azure AI Foundry manage complex biological data, identify disease-relevant targets and advance candidates to clinical trials — accelerating research in what is traditionally a slow, costly and complex pharmaceutical R&D process. They enable researchers to analyze genetic data and identify drug targets with AI-generated reports to facilitate business case development; use physics-based models to evaluate candidates for potency, safety and synthesizability; integrate with specialized large language models for drug discovery; and combine AI agents with structured workflows to reduce document drafting time by over 85% while improving first-pass quality of scientific documents by 60%.
To enhance manufacturing operations in a fast-paced and complex industry, global consumer foods producer Kraft Heinz partnered with Microsoft to embed AI and machine learning across its production facilities, resulting in smarter decision-making and operational improvements. The company built an AI-powered platform — Plant Chat — providing real-time insights on the factory floor and reducing downtime to enable faster, more confident decision-making with proactive guidance. The solution analyzes over 300 variables and allows operators to interact via natural language to improve consistency, reduce guesswork, decrease waste and maintain compliance — even for less experienced operators. Since implementation and collectively with other initiatives, these efforts have resulted in a 40% reduction in supply-chain waste, a 20% increase in sales forecast accuracy and a 6% product-yield improvement across all North American manufacturing sites through the third quarter of 2024. Combined with further operational improvements, this work has yielded more than $1.1 billion in gross efficiencies from 2023 through the third quarter of 2024.
To redefine work and scale intelligent automation globally, digital native Manus AI developed an advanced autonomous AI system designed to understand user intent and execute complex workflows independently across various domains. The solution leverages a multi-agent architecture through Microsoft Azure AI Foundry to deliver scalable, versatile task automation for millions of users worldwide. Its Wide Research capability deploys specialized sub-agents to rapidly perform large-scale, multi-dimensional research tasks; saving significant time and delivering actionable insights to make complex analysis accessible and efficient for strategic decision-making. Manus AI can also build dynamic dashboards so organizations can visualize trends, anomalies and market insights in real-time; driving strategic planning with reliable, up-to-date information. The multimodel image editing and creation capabilities also allow users to support brand consistency and enable marketers and product teams to iterate rapidly.
To advance automotive innovation, stabilize supply chain volatility, simplify production complexity and meet sustainability demands, Mercedes-Benz scaled AI innovation across its global production network. The MO360 data platform connects over 30 car plants worldwide to the Microsoft Cloud, enabling real-time data access, global optimization and analytics. The Digital Factory Chatbot Ecosystem uses a multi-agent system to empower employees with collaborative insights, and Paint Shop AI leverages machine learning simulations to diagnose efficiency declines and reduce energy consumption of the buildings and machines — including 20% energy savings in the Rastatt paint shop. Using NVIDIA Omniverse on Azure, Mercedes-Benz created large-scale factory digital twins for visualization, testing and optimization of production lines — enabling agile planning and continuous improvement. The MBUX Virtual Assistant embedded in over 3 million vehicles, powered by Microsoft’s ChatGPT and Bing Search, offers natural, conversational voice interactions and integrates Microsoft 365 Copilot with Teams directly into vehicles to enable mobile workspaces.
U.S. stock exchange and financial services technology company Nasdaq integrated AI capabilities into its Nasdaq Boardvantage platform to help corporate governance teams and board members save time, reduce information overload, improve decision-making and enhance board meeting preparation and governance workflows. The board management platform is used by leadership teams at over 4,000 organizations worldwide to centralize activities like meeting planning, agenda building, decision support, resolution approval, voting and signatures. Using Azure OpenAI GPT-4o mini, the AI Summarization feature helps board secretaries significantly reduce manual effort, saving hundreds of hours annually with accuracy between 91% to 97%. AI Meeting Minutes helps governance teams draft minutes by processing agendas, documents and notes while allowing for customization of length, tone and anonymization; accelerating post-meeting workflows and saving up to five hours per meeting.
As customers seek to use AI more to shop and search for products, luxury lifestyle company Ralph Lauren developed a personal, frictionless, inspirational and accessible solution to blend fashion with cutting-edge AI. Working with Microsoft, Ralph Lauren developed Ask Ralph: an AI-powered conversational tool providing styling tips and outfit recommendations from across the Polo Ralph Lauren brand. Powered by Azure OpenAI, the AI tool uses a natural language search engine to adapt dynamically to specific language inputs and interpret user intent to improve accuracy. It supports complex queries with exploratory or nuanced information needs with contextual understanding; and can discern tone, satisfaction and intent to refine recommendations. The tool also picks up on cues like location-based insights or event-driven needs. With Ask Ralph, customers can now reimagine how they shop online by putting the brand’s unique and iconic take on style right into their own hands.
Industrial automation and digital transformation expert Rockwell Automation is integrating AI and advanced analytics into its products to help manufacturers adapt seamlessly to market changes, reduce risk and develop agentic AI capabilities to support innovation and growth. FactoryTalk Design Studio
Copilot, a cloud-based environment for programming, enables rapid updates to code for evolving production needs — reducing complex coding tasks from days to minutes. Rockwell’s digital twin software, Emulate3D®, creates physics-based models for virtual testing of automation code and AI, reducing costly real-world errors and production risks while cutting on-site commissioning times by 50%. With the integration of NVIDIA Omniverse — a collaborative, large-scale digital twin platform — users can perform multi-user factory design and testing to facilitate cross-disciplinary collaboration, address industry challenges and unlock opportunities through digital simulation before real-world deployment.
To enable a cleaner, more resilient energy future, Schneider Electric is powering AI-driven industry innovation by addressing grid stability and enterprise sustainability challenges. Built using Microsoft Azure, the company developed solutions for organizations to act faster and smarter while delivering measurable improvements in grid reliability and enterprise ESG management. Resource Advisor Copilot transforms raw ESG and energy data into actionable insights via natural language queries to support knowledge-based and system data questions; saving sustainability managers hundreds of hours annually in data analysis and reporting tasks in early testing. Grid AI Assistant allows operators to interact with complex grids using natural language to improve response times and accuracy during critical events; reducing outages by 40% and speeding up application deployment by 60%. Schneider Electric’s integration of AI tools reflects a strategic approach to digitally transforming energy management, addressing both operational resilience and sustainability imperatives.
To enhance personalized learning, streamline operations and support educators with innovative technology, the State of São Paulo’s Department of Education (SEDUC) partnered with Microsoft to equip schools with cloud and AI solutions — including Azure OpenAI, Microsoft 365, Azure and Dynamics 365. SEDUC is applying responsible AI solutions at scale to address sector priorities like delivering timely, high-quality formative feedback and reducing repetitive administrative work. With Essay Grader, teachers automate portions of grading and receive suggested feedback, freeing time for lesson design and individual support. With Question Grader, students can answer questions more openly with their own perspectives and reasoning while still receiving curated feedback typically reserved for extensive exams. By leveraging these AI-powered solutions, SEDUC is improving learning outcomes, boosting efficiency and strengthening teacher impact — anchored in equity, transparency and sound governance.
Australia’s leading telecommunications company, Telstra, is transforming its customer service operations to improve the experience for its customers and the people that serve them. One of the biggest pain points for teams is navigating multiple systems to identify and resolve a customer issue — leading to long handling times and reliance on how team members interpret various data sources. By leveraging AI solutions built on Azure OpenAI and Microsoft 365 Copilot, the company is enabling instant knowledge access and streamlined workflows. With One Sentence Summary, agents have a concise overview of customer interactions to improve efficiency and customer satisfaction — reducing call handling time by over one minute and repeat contacts by nearly 10%. Ask Telstra provides AI-generated responses from Telstra’s knowledge base in near real-time to assist agents with accurate product, plan and troubleshooting information across a wide variety of topics during calls; facilitating seamless agent-customer interactions with AI assistance.
As one of the largest leading global automakers, Toyota is pioneering AI intelligence in manufacturing with O-beya System: a multi-agent AI system simulating expert discussions virtually. Based on decades of engineering knowledge, the solution fosters a collaborative project management approach to enhance problem-solving and innovation in vehicle development while identifying key challenges to help analyze and diagnose problems. O-beya can auto-select AI agents in fields like fuel efficiency, drivability, noise and vibration, energy management and power management to pinpoint causes and suggest solutions. The system also offers interactive features; including prompt history, term explanations and creative summaries to further enable engineers to explore and validate mitigation strategies efficiently. The system leverages Microsoft Azure OpenAI, Azure AI Search and Azure Cosmos DB to analyze internal design data and help Toyota accelerate innovation, preserve institutional knowledge and resolve complex engineering issues faster. Since January 2024, over 800 powertrain engineers have accessed the system, utilizing it hundreds of times monthly across multiple business units.
As we seek to help our customers realize their AI ambitions, our mission remains unchanged: to empower every person and every organization on the planet to achieve more. We are at our best as a company when we put our technology to work for others. As you move forward on your AI journey, ask what AI can do for your organization and what it means to demand more from it. Leveraging the Microsoft portfolio, together we can do more to positively impact society; going beyond efficiency and productivity to solve for humanity’s biggest challenges. I look forward to partnering with you on your path to becoming Frontier.
The post Becoming Frontier: How human ambition and AI-first differentiation are helping Microsoft customers go further with AI appeared first on The Official Microsoft Blog.
Over the past few years, we have driven remarkable progress accelerating AI innovation together with our customers and partners. We are achieving efficiency and productivity at scale to shape industries and markets around the world. It is time to demand more of AI to solve humanity’s biggest challenges by democratizing intelligence, obsolescing the mundane and…
The post Becoming Frontier: How human ambition and AI-first differentiation are helping Microsoft customers go further with AI appeared first on The Official Microsoft Blog.Read More












