Tag Archives: microsoft
A milestone achievement in our journey to carbon negative
In 2020, Microsoft announced a moonshot commitment to become carbon negative by 2030 — accelerating work across our company to build the partnerships and technologies needed to advance sustainability for our businesses, our customers and the world. A key milestone on this journey was our aim to match 100% of our annual global electricity consumption with renewable energy(1) by 2025. Today, we are pleased to share that Microsoft has achieved this milestone(2). This progress helps drive investment into the power systems where we operate, expand clean energy supply and advance broader energy innovation.
Over a decade of investment: 40 gigawatts of new renewable energy contracted
What began in 2013 with a single 110 megawatt (MW) power purchase agreement (PPA) in Texas — a small first step to demonstrate how corporate procurement could scale clean energy(3) — has evolved into one of the largest clean energy portfolios in the world. This first deal not only supported Microsoft’s early cloud services but also set in motion a decade of commercial partnerships and learning-by-doing, demonstrating how corporate demand for advanced energy solutions can help deliver a more affordable, sustainable power system while supporting reliability for customers.
Since our carbon negative announcement in 2020, we have contracted 40 gigawatts (GW) of new renewable energy supply across 26 countries, working with more than 95 utilities and developers across 400+ contracts and counting. To put that amount in perspective — that’s enough energy to power about 10 million US homes. Of that contracted volume, 19 GW are now online, delivering new clean energy supply to the power grid, while the remainder is slated to come online over the next five years.
Our new renewable energy procurement continues to deliver significant environmental benefits, including the reduction of Microsoft’s reported Scope 2 carbon dioxide emissions by an estimated 25 million tons(4) and the mobilization of billions of dollars’ worth of private investment in regions where we operate.
Catalyzing market investment through bankable, repeatable models
Microsoft is among the early pioneers in developing technical and commercial practices that make procurement tools bankable, repeatable and scalable in each market. Our clean energy purchasing navigates a global patchwork of power market designs, requiring creativity in how we balance cost, time to market and project sizing across the planning, contracting and management of our portfolio.
Our work has benefited from a broad coalition of partners helping to build this market together. According to Bloomberg New Energy Finance, more than 200 global corporations have collectively purchased nearly 200 GW of clean energy worldwide since 2008. Working alongside other clean energy buyers — as well as hundreds of utilities, manufacturers, financiers, developers and engineers — we have helped reduce transaction costs, expand developer access to financing and streamline procurement approaches that other buyers can adopt.
This global flywheel of partnership, investment, technology and policy innovation is expected to continue to facilitate billions of dollars’ worth of investment into infrastructure and jobs. And as we’ve seen repeatedly, when Microsoft sends a clear market signal for world-class, first-of-a-kind technologies and infrastructure, the power sector rises to the challenge. Our procurement over the past decade has demonstrated that partnerships, communities and innovation are essential ingredients that help to accelerate first-of-a-kind technologies and infrastructure at scale.
Scaling partnerships to scale infrastructure
Critical to Microsoft’s success in expanding digital infrastructure and supporting our local communities is our ability to build trusted partnerships with the over 95 global energy suppliers that support our clean energy portfolio. We have sourced clean energy through multiple requests for proposal or information, bilateral engagements and clean tariffs to evaluate over 5,000 unique carbon-free energy projects around the world.
Today, Microsoft has six energy company partners with which we have over 1 GW of contracted renewable energy capacity, and more than 20 energy supplier partners where each partner has at least five separate renewable energy projects with Microsoft — evidence of the durable, repeatable relationships necessary to scale clean energy. Combining scale with speed, Microsoft’s landmark 10.5 GW framework agreement with Brookfield sends a long-term, 2030 demand signal to the market that enables developers to raise funding more efficiently, bolster supply chains, hire engineers and construct world-class energy infrastructure.
Putting communities first
Our renewable energy procurement has mobilized billions of dollars in private investment, supported thousands of jobs across the communities where we operate and delivered meaningful co-benefits. Through partnerships with developers and nonprofit organizations, we’ve worked to embed community-driven benefits into our energy portfolio. These benefits include robust infrastructure, economic inclusion and support for community-focused organizations.
Our support for communities shows up in projects like our 500 MW PPA with Sol Systems and our 250 MW PPA with Volt Energy Utility, which provided local training and jobs, as well as grants to community nonprofit organizations and habitat restoration. We’ve also signed over 1.5 GW of distributed solar, bringing clean energy directly into hundreds of communities around the world. Landmark agreements like our 500 MW offtake with Pivot Energy and our 270 MW offtake with PowerTrust are expected to foster employment, energy cost savings and grid resilience in communities across the United States, Mexico and Brazil. More details on the above examples and our approach to community benefits in clean energy agreements can be found in a dedicated Microsoft whitepaper.
Innovation unlocks new markets and pathways
Microsoft’s clean energy procurement continues to play an important role in catalyzing technical, commercial and regulatory innovation. Our commercial efforts have helped lower barriers to entry into new markets and expand access into multi-technology contracts that accelerate decarbonization.
In Japan, Microsoft signed one of the first corporate PPAs in the country’s restructured power market. Our 25 MW, 20-year agreement with Shizen represents the first single-asset virtual PPA executed in the country, which helped pave the way for over 2 GW of corporate procurement since 2024, according to Bloomberg New Energy Finance. Alongside opening new markets, we have structured several multi-technology offtakes in nascent markets for corporate procurement. In India, Microsoft purchased a combined 437 MW solar/wind hybrid offtake from ReNew, where our projects will support energy access and rural electrification. In Microsoft’s home state of Washington, our datacenters in Douglas County are supplied by 100% carbon-free energy, as we leverage a creative blend of new wind power and hydropower storage to deliver around-the-clock clean energy.
Looking forward to 2030 and beyond
In 2025, the International Energy Agency (IEA) described a new “Age of Electricity,” marked by accelerating electricity demand from electric vehicles, air conditioners, data centers and heat pumps. As the world electrifies more of the economy, the demand for affordable, reliable and clean electricity will continue to rise.
Our experience building Microsoft’s clean energy portfolio both reflects and furthers global trends. According to IEA data, renewable energy generation has expanded nearly four-fold since 2000. In many power markets across the world, clean energy is one of the fastest-growing sources of generation, and often the one with the fastest time-to-market. Corporate buyers like Microsoft continue to serve as an important catalyst in driving commercial demand for innovation and infrastructure across the power industry.
As we continue our journey toward becoming carbon negative by 2030, Microsoft will continue to push for an expansive focus on all forms of carbon-free electricity, complementing our portfolio of renewable energy resources. We recognize that the world’s rising electricity needs require a balanced, all-of-the-above decarbonization strategy to meet global economic growth and environmental goals, and our sustainability goals will continue to support this approach. Such a strategy requires a broader set of carbon-free energy and grid-enabling technologies, including nuclear energy, next-generation grid infrastructure and carbon capture technology. Just as renewable energy was a relatively small part of global energy grids in 2013 when we signed our first PPA, today many advanced energy technologies remain early in their development but offer significant promise to accelerate progress towards an affordable, reliable and sustainable energy future.
Microsoft has already taken early steps to support the advancement of a broader set of carbon-free energy technologies as we partner with Helion and Constellation Energy on a 50 MW fusion project in Washington state and work with Constellation to restart the 835 MW Crane Clean Energy Center in Pennsylvania. Microsoft’s Climate Innovation Fund has allocated $806 million of capital to 67 investees, with 38% directed toward Energy Systems — advancing carbon-free power and fuels, energy storage and energy management solutions.
We welcome continued collaboration with our power sector partners to bring these innovations to market and incorporate new technology tools in the process to accelerate their development.
We will continue to build and leverage new AI-driven tools to design, permit and deploy new power technologies that help expand and more efficiently operate the electricity grid, bringing more clean energy online faster. This work is exemplified by our recently announced collaborations with Idaho National Laboratory and the Midcontinent Independent System Operator (MISO), among other examples.
And as we advance innovative energy technologies, we recognize that standards must evolve alongside innovation. That is why we will continue participating in industry forums that strengthen carbon accounting frameworks — so that our clean energy procurement is measured with greater accuracy and delivers real-world emissions reductions, with a continued focus on maintaining the high level of integrity that the world has come to expect from Microsoft.
Our carbon negative commitment remains a call to action — for Microsoft, our customers and the broader technology sector — to invest in an affordable, reliable and sustainable power system. As we look toward 2030, that call to action has never been clearer.
Gratitude — and momentum for the work ahead
Today’s milestone represents a shared achievement among the utility professionals, clean energy developers, community leaders, technology innovators and forward-thinking policymakers who continue the deployment of renewable energy. Meeting today’s milestone shows what partnership can deliver in bringing big ideas to life. The future of carbon-free energy is one that we will create – together.
As Microsoft’s Chief Sustainability Officer, Melanie Nakagawa leads the company’s targets to be carbon negative, water positive, and zero waste by 2030. She brings deep experience at the intersection of policy, business, and technology to advance climate and sustainability solutions globally.
As President of Cloud Operations + Innovation at Microsoft, Noelle Walsh leads the organization that powers the global Microsoft Cloud. She oversees the company’s physical cloud infrastructure and operations, with a charter focused on safety, security, availability, sustainability, and competitive infrastructure growth—bringing decades of global operational leadership.
Footnotes
- Renewable energy is defined within Microsoft’s fact sheet https://aka.ms/SustainabilityFactsheet2025, which represents FY24 data.
- To date, Microsoft’s renewable energy target includes two primary categories: renewable energy from contracted projects and grid mix. The first is renewable energy delivered under PPAs or similar long-term contracting mechanisms, generally for new projects where our financial involvement in the project’s development is critical for its success. This category represents more than 90% of the renewable energy applied to achieve our 2025 target. The second category is “grid mix” – renewable energy supported via our standard utility relationships and rates, inclusive of policy programs such as renewable portfolio standards and state and utility decarbonization goals. Our 2025 100% renewable target does not include purchases from short-term, so-called “spot market” renewable energy credits (RECs) sourced from operational clean energy projects. With the above in mind, Microsoft leverages a straightforward formula to determine our 100% renewable energy metric on a global, annual basis. We update and further detail the methodology and assumptions behind this formula in our annual sustainability reports.
- Clean energy, also referred to in this blog as carbon-free energy, is defined within Microsoft’s fact sheet https://aka.ms/SustainabilityFactsheet2025, which represents FY24 data.
- Reductions in reported Scope 2 emissions are calculated for FY20–FY25 as the cumulative difference between location-based and market-based emissions, excluding the use of short-term, so-called “spot market” RECs.
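The two footnote calculations above (the 100% renewable energy match and the reported Scope 2 reduction) reduce to simple arithmetic. The sketch below is illustrative only: the function names and sample figures are assumptions for demonstration, not Microsoft’s published methodology, which is detailed in its annual sustainability reports.

```python
# Hedged sketch of the footnote formulas; all values are illustrative.

def renewable_energy_pct(contracted_mwh: float, grid_mix_mwh: float,
                         total_consumption_mwh: float) -> float:
    """Share of annual electricity consumption matched by renewables:
    (contracted PPA volumes + grid-mix renewables) / total consumption."""
    return 100.0 * (contracted_mwh + grid_mix_mwh) / total_consumption_mwh


def scope2_reduction(location_based: list[float],
                     market_based: list[float]) -> float:
    """Cumulative Scope 2 reduction over a period (e.g. FY20-FY25):
    sum of yearly differences between location-based and market-based
    emissions, in tons of CO2."""
    return sum(lb - mb for lb, mb in zip(location_based, market_based))


# Hypothetical annual figures, chosen so the match works out to 100%.
pct = renewable_energy_pct(contracted_mwh=21_000_000,
                           grid_mix_mwh=2_000_000,
                           total_consumption_mwh=23_000_000)
print(f"{pct:.0f}% renewable")  # -> 100% renewable
```

Note that, per the footnote, contracted volumes dominate the numerator (over 90% of the total), with grid-mix renewables supplying the remainder, and spot-market RECs excluded entirely.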
The post A milestone achievement in our journey to carbon negative appeared first on The Official Microsoft Blog.
Updates in two of our core priorities
Satya Nadella, Chairman and CEO, posted the below message to employees on Viva Engage this morning.
I am excited to share a couple of updates in two of our core priorities: security and quality. Hayete Gallot is rejoining Microsoft as Executive Vice President, Security, reporting to me. I’ve also asked Charlie Bell to take on a new role focused on engineering quality, reporting to me.
Charlie and I have been planning this transition for some time, given his desire to move from being an org leader to being an IC engineer. And I love how energized he is to practice this craft here day in and day out!
Hayete joins us from Google where she was President, Customer Experience for Google Cloud. Before that, she spent more than 15 years at Microsoft in senior leadership roles across engineering and sales, playing critical roles in building two of our biggest franchises – Windows and Office – and leading our commercial solution areas’ go-to-market efforts. And she was instrumental in the design and implementation of our Security Solution Area. She brings an ethos that combines product building with value realization for customers, which is critical right now.
As we shared during our quarterly earnings last week, we have great momentum in security, including progress with Security Copilot agents, strong Purview adoption, and continued customer growth, and we will build on this.
We have a deep bench of talent and leaders across our security business, and this team will now report to Hayete. Additionally, Ales Holecek will take on a new role as Chief Architect for Security, reporting to Hayete. Ales has spent years leading architecture and development across some of our most important platforms and will help bring that same sensibility to security and its connections back to our existing scale businesses and the Agent Platform.
As we shared yesterday, we have a new operating rhythm with commercial cohorts, and Hayete and her team will now be accountable for our security product rhythms as part of this process.
Charlie built our Security, Compliance, Identity, and Management organization and helped rally the company behind the Secure Future Initiative. And we’re fortunate to have his continued focus and leadership on another one of our top priorities. With our Quality Excellence Initiative, we have increased accountability and accelerated progress against our engineering objectives to ensure we always deliver durable, high-quality experiences at global scale. And Charlie will partner closely with Scott Guthrie and Mala Anand on this work.
I’m excited to welcome Hayete back to Microsoft to advance this mission critical work, and grateful to Charlie for all he has done for our security business and what he will continue to do for the company.
Satya
The post Updates in two of our core priorities appeared first on The Official Microsoft Blog.
How Microsoft is empowering Frontier Transformation with Intelligence + Trust
At Microsoft Ignite in November, we introduced Frontier Transformation — a holistic reimagining of business aligning AI with human ambition to help organizations achieve their highest aspirations and growth potential. While AI Transformation centered on efficiency and productivity, Frontier Transformation challenges us to do more for humanity by democratizing intelligence to unlock creativity and innovation for organizations and people around the world.
Across industries, our customers are leading the way to becoming Frontier, sharing three common traits anchored in a foundation of Intelligence + Trust. The first is AI in the flow of human ambition: putting Copilots and agents directly in the tools people use. The second is ubiquitous innovation: empowering the maker in every role. These capabilities are served through Microsoft’s new intelligence layer: Work IQ, which understands how people work; Fabric IQ, which provides a trusted semantic layer for reasoning over an organization’s data; and Foundry IQ, the world’s leading AI app server powering safe, scalable agent experiences. Together, these capabilities put the “I” back in AI by grounding Copilots and agents in an organization’s own data, logic and workflows to fully understand operations and drive decisions that matter most. The third trait is observability at every layer of the stack, ensuring trust, safety and reliable outcomes. As the control plane to observe, govern and secure all AI artifacts, Agent 365 provides a unified view of every AI agent running in an organization’s environment — whether built on Microsoft’s platforms or others.
Our customers and partners are showcasing what can be achieved with Frontier Transformation and human ambition paired with Copilots + agents, and I am pleased to share their stories — including many onstage with us at Ignite. Their journeys demonstrate what is possible for organizations everywhere when AI-first innovation is built upon Intelligence + Trust.
Putting AI in the flow of human ambition so people can achieve more in every role, across every industry
Using a secure Azure foundation, Epic embedded AI directly into clinical workflows, enabling hundreds of thousands of clinicians worldwide to work faster and deliver higher quality care. Epic AI generates documentation in the flow of work, reducing time spent on prior authorization questions by over 40% and surfacing critical insights that could be missed during manual review. In one month alone, Epic AI automatically generated more than 16 million patient record summaries, helping clinicians reduce administrative workload and speed time to treatment. AI-driven imaging follow-up also boosted early cancer detection at the Christ Hospital to 69%, far above the national 46% average. By delivering real improvements like these today, Epic is building confidence and familiarity that will accelerate adoption of tomorrow’s AI-enabled breakthroughs in precision medicine, drug discovery and the understanding of disease.
To create a consistent experience across its entire workforce, heritage brand Levi Strauss & Co. standardized on Windows 11, Copilot+ PCs, Intune, Microsoft 365 Copilot and Microsoft Foundry to give every team — from designers to retail associates to distribution centers — a modern, AI-powered workplace. With Copilot and agents accelerating workflows and eliminating fragmentation across legacy systems, teams can model demand faster, bring products to market with greater precision and spend more time on creative and commercial work that strengthens the brand. They are also reducing operational noise, strengthening security and scaling insights across design, merchandising, retail and supply chain. With a unified, secure Microsoft platform, Levi’s is enriching the employee experience, driving sharper execution and building durable advantage in an increasingly dynamic market.
London Stock Exchange Group (LSEG) is unifying the data foundation of global finance by modernizing its platform on Microsoft Fabric and bringing trusted financial intelligence directly into Microsoft 365 Copilot. The company has consolidated 30 legacy data systems, 1,200 datasets and more than 33 petabytes of financial content into a single, governed environment. This unified foundation is now delivering faster, cleaner insights to 44,000 customers in over 170 countries and cutting product development timelines from years to months. With Fabric and Copilot working together, financial professionals can access LSEG’s expansive data and analytics directly in the flow of work — helping them make decisions with greater speed and confidence while reducing friction across risk modeling, regulatory compliance and investment workflows. By simplifying the data estate first, LSEG is safely surfacing insights through Microsoft 365 Copilot and empowering teams across the organization to innovate with consistency, compliance and at global scale.
The University of Manchester is the first higher education institution in the world to provide Microsoft 365 Copilot access and training to all 65,000 students and staff. Learners and researchers will gain equitable access to Copilot-powered tools to strengthen teaching, accelerate interdisciplinary discovery and build future-ready skills. For students, this is an essential aid for revision, translation and academic success, while university leadership can ensure responsible-use policies and training so every student can use AI ethically and confidently. Researchers can synthesize vast volumes of information across fields from photonic materials to biomedical science, enabling faster progress on challenges from cancer treatment to sustainable manufacturing, while operationally Copilot helps administrative staff free their time for higher-value work. The University of Manchester is defining a new model for modern higher education by pairing its decades of AI innovation with equitable access to cutting-edge AI tools that prepare the next generation of citizens, innovators and creators.
Inspiring the maker in every one of us with ubiquitous innovation that amplifies creativity and accelerates impact
Adobe is redefining creativity, productivity and customer experience by infusing AI deeply into its product ecosystem, powered by Azure, Copilot and Microsoft Foundry. By supporting third-party models directly inside Adobe Firefly, Adobe lets creators choose the best model for the job while unlocking new agentic capabilities across Photoshop, Acrobat and Adobe’s Customer Experience Orchestration solutions, significantly accelerating workflows through AI-driven agents. With daily use of GitHub Copilot, its engineering organization is boosting developer productivity and speed to innovation. The company is also focused on enterprise-grade governance and data provenance to help customers trust and verify content as AI adoption grows — further reinforced by Adobe Marketing Agent for Microsoft 365 Copilot as part of the Agent 365 preview. By combining open model choice with responsible AI infrastructure, Adobe is giving customers creative choice and operational confidence, while unlocking faster innovation, without compromising security, trust or brand integrity.
In an industry facing unprecedented pressure — from rising costs to shrinking margins — Land O’Lakes is accelerating AI innovation across American agriculture by developing a new digital assistant called Oz. Built on models within Microsoft Foundry, the digital assistant turns an 800-page crop protection guide into instant, data-rich insights delivered through intuitive workflows. The Copilot solution provides agronomic expertise throughout the growing season tailored to each grower’s soil, crop and environmental conditions — with personalized recommendations that address unique farm-level challenges. This AI-enhanced solution streamlines access to critical information so experts can help growers make faster, more confident decisions that help control input costs, boost crop yields and drive long-term success. The work Land O’Lakes is doing shows how AI can democratize intelligence — empowering farmers to grow their businesses while feeding their communities and building a more resilient agricultural ecosystem.
Mercedes-Benz is transforming every layer of its enterprise — from headquarters to the factory floor to the driving experience. Built on Azure and powered by Copilot, the company is moving toward making Copilot available broadly, with over 50 business areas already using Copilot Studio to build and automate their own workflows and agents. GitHub Copilot has driven a 70% increase in engagement among its software developers, shifting teams from routine coding to higher value innovation. On the factory floor, Mercedes’ MO360 data platform connects 30 passenger car plants, and its Digital Factory Chat multiagent system is cutting issue diagnosis time from days to minutes. With the next generation of Hey Mercedes — powered by Azure OpenAI, Bing, Microsoft Teams, Intune and Microsoft 365 Copilot — the vehicle becomes a “third workspace,” enabling productivity through natural voice. Investing in AI skills, tools and platform breadth is helping Mercedes-Benz build enterprise capability and bend the curve on innovation, with efficiency gains that help teams innovate and drive operational impact internally and across customer experiences. We also recently announced our partnership with the Mercedes-AMG PETRONAS F1 Team to drive innovation across its racing operations. With Microsoft’s cloud and enterprise AI stack — including Azure AI, Microsoft 365 and GitHub — it is turning data into real-time intelligence that powers faster decisions, smarter strategies and sustained competitive advantage on and off the track.
Pantone is transforming decades of color expertise into a next-generation AI offering with the launch of its Pantone Palette Generator, built entirely on Microsoft Foundry and Azure AI. By applying a multi-agent architecture powered by Azure AI Search, Azure Cosmos DB and Azure OpenAI, Pantone is bringing instant, trend-backed color guidance directly into creative workflows. What once required weeks of research across physical color books and expert archives can now be achieved in seconds, enabling designers, brands and product teams to move from inspiration to production with greater speed and accuracy. Using GitHub Copilot, its engineering team accelerated development of its initial proof of concept by more than 200 hours, allowing the company to focus on enhancing agent orchestration and color science logic. As Pantone expands its AI-native platform, it is also helping creators build new skills — learning how to integrate agentic workflows, prompt engineering and trend-driven insights into the design process. The platform modernizes Pantone’s iconic color system and positions the company to scale new digital services as it evolves its multiagent capabilities and reshapes business processes.
Westpac is bringing Copilot to more than 35,000 employees across its global workforce — the largest Microsoft 365 Copilot rollout to date in Australia and the largest deployment in financial services within Asia Pacific. This comes after a successful pilot with 15,000 employees that delivered strong business outcomes and freed up significant time for users each month. The company is now deploying AI to accelerate work, reduce friction and reinvent how employees engage with their customers. The bank is pairing its Copilot implementation with AI education programs and Microsoft Copilot Studio to build custom agents for HR and IT, while creating a new Azure-based innovation sandbox to enable teams to quickly experiment with AI-enabled workflows and solutions. Westpac’s move to embed AI at scale is a strategic investment in people and a catalyst for more efficient, higher value work—underscoring how responsible, enterprise grade AI can drive meaningful value for employees, customers and shareholders.
Bringing observability to every layer of the stack to ensure outcomes are reliable, safe and aligned with the business
ServiceNow is helping its customers accelerate AI adoption safely by integrating with Microsoft Agent 365 — Microsoft’s control plane for securing and governing agents at scale. By enabling customers to bring their agentic workflows into a unified governance environment, ServiceNow helps them gain visibility, enforce access controls and ensure compliance across AI systems. Companies like AstraZeneca are already using ServiceNow AI Control Tower together with Agent 365 to manage lab and operational workflows, saving 90,000 hours that researchers can redirect toward discovering lifesaving drugs. The ability to see, trust and scale AI agents gives organizations confidence to move quickly without losing control. ServiceNow is demonstrating how advanced workflow AI delivers its greatest value when paired with enterprise-wide governance — giving organizations the speed and efficiency they want while maintaining the control required for mission critical operations.
To help organizations safely accelerate agentic AI adoption, Workday is building solutions that work with Agent 365 for a unified way to govern its agents. The company is helping businesses address the shift from shadow IT to shadow AI as employees begin incorporating AI agents into the way they work. Workday’s Agent System of Record helps customers establish governance and oversight around the work AI agents are doing, allowing them to scale intelligent workflows with confidence. Workday highlights that responsible AI acceleration requires combining powerful automation with a shared control plane — making the secure, compliant path the easiest one, and enabling organizations to scale without losing oversight.
As organizations move to operationalize agentic AI at scale, Genspark is integrating with Agent 365 to provide a governed, enterprise grade path for deploying its rapidly growing ecosystem of Super Agents. As employees increasingly experiment with personal AI creation tools, companies are looking for ways to shift from unmanaged shadow AI to secure, outcome-driven agent workflows. Through Agent 365, the platform enables organizations to register its agents alongside those from Microsoft and other partners, apply unified governance policies, maintain consistent identity and permission controls, and ensure all agent generated outputs align with corporate, regulatory and data residency requirements. This governance layer extends the value of its own agent registry — where more than 80 specialized agents and millions of user generated prompts are already driving productivity — allowing customers to safely scale agentic creation across roles, teams and industries. Genspark demonstrates that responsible acceleration requires pairing powerful, outcome first agent experiences with a shared control plane like Agent 365, enabling AI to scale without losing oversight.
At Ignite, we also introduced Agent Factory — a new way for organizations to build and scale AI agents with confidence — bringing together Work IQ, Fabric IQ and Foundry IQ under a single, ROI-driven model. Agent Factory enables companies to take complex workflows — from claims processing to freight forwarding to supply chain management — and turn them into measurable, production-ready agentic systems supported by Microsoft’s forward deployed engineers, partner ecosystem and built-in governance. As agents move from experimentation to mission-critical automation, companies need a standardized, governed path to build and scale them, and Agent Factory is the solution — tying innovation directly to measurable ROI.
Our ambition with Frontier Transformation is to ensure that the maker in every one of us is empowered by everything we build and deliver. As we enter the second half of the fiscal year, one thing is clear: our customers and partners are redefining what can be achieved as Frontier Firms. Built on a foundation of Intelligence + Trust, and with the full breadth of Microsoft’s cloud and AI solutions, we are committed to helping every organization scale AI-first innovation. Our model-diverse, open and heterogeneous platform unifies your IQ assets — and the human ambition that lives inside of your company — to deliver outcomes that help you achieve your highest aspirations. Thank you for your continued partnership and trust as we continue shaping what is possible together.
The post How Microsoft is empowering Frontier Transformation with Intelligence + Trust appeared first on The Official Microsoft Blog.
Maia 200: The AI accelerator built for inference
Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the economics of AI token generation. Maia 200 is an AI inference powerhouse: an accelerator built on TSMC’s 3nm process with native FP8/FP4 tensor cores, a redesigned memory system with 216GB HBM3e at 7 TB/s and 272MB of on-chip SRAM, plus data movement engines that keep massive models fed, fast and highly utilized. This makes Maia 200 the most performant first-party silicon from any hyperscaler, with three times the FP4 performance of the third-generation Amazon Trainium, and FP8 performance above Google’s seventh-generation TPU. Maia 200 is also the most efficient inference system Microsoft has ever deployed, with 30% better performance per dollar than the latest generation hardware in our fleet today.
Maia 200 is part of our heterogeneous AI infrastructure and will serve multiple models, including the latest GPT-5.2 models from OpenAI, bringing performance per dollar advantage to Microsoft Foundry and Microsoft 365 Copilot. The Microsoft Superintelligence team will use Maia 200 for synthetic data generation and reinforcement learning to improve next-generation in-house models. For synthetic data pipeline use cases, Maia 200’s unique design helps accelerate the rate at which high-quality, domain-specific data can be generated and filtered, feeding downstream training with fresher, more targeted signals.
Maia 200 is deployed in our US Central datacenter region near Des Moines, Iowa, with the US West 3 datacenter region near Phoenix, Arizona, coming next and future regions to follow. Maia 200 integrates seamlessly with Azure, and we are previewing the Maia SDK with a complete set of tools to build and optimize models for Maia 200. It includes a full set of capabilities, including PyTorch integration, a Triton compiler and optimized kernel library, and access to Maia’s low-level programming language. This gives developers fine-grained control when needed while enabling easy model porting across heterogeneous hardware accelerators.
Engineered for AI inference
Fabricated on TSMC’s cutting-edge 3-nanometer process, each Maia 200 chip contains over 140 billion transistors and is tailored for large-scale AI workloads while also delivering efficient performance per dollar. On both fronts, Maia 200 is built to excel. It is designed for the latest models using low-precision compute, with each Maia 200 chip delivering over 10 petaFLOPS in 4-bit precision (FP4) and over 5 petaFLOPS of 8-bit (FP8) performance, all within a 750W SoC TDP envelope. In practical terms, Maia 200 can effortlessly run today’s largest models, with plenty of headroom for even bigger models in the future.
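The headline figures above imply a compute-efficiency number worth making explicit. A quick sanity check, using only the peak-throughput and TDP values quoted in this post (peak datasheet numbers, not sustained real-world throughput):

```python
# Back-of-envelope efficiency from the specs quoted above:
# 10 PFLOPS FP4, 5 PFLOPS FP8, 750 W SoC TDP. Peak figures only.

FP4_PFLOPS = 10   # peak 4-bit throughput, petaFLOPS
FP8_PFLOPS = 5    # peak 8-bit throughput, petaFLOPS
TDP_WATTS = 750   # SoC power envelope

# Peak compute per watt, in teraFLOPS/W (1 PFLOPS = 1000 TFLOPS)
fp4_tflops_per_watt = FP4_PFLOPS * 1000 / TDP_WATTS
fp8_tflops_per_watt = FP8_PFLOPS * 1000 / TDP_WATTS

print(f"FP4: {fp4_tflops_per_watt:.1f} TFLOPS/W")  # 13.3
print(f"FP8: {fp8_tflops_per_watt:.1f} TFLOPS/W")  # 6.7
```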
Crucially, FLOPS aren’t the only ingredient for faster AI. Feeding the compute with data is equally important. Maia 200 attacks this bottleneck with a redesigned memory subsystem centered on narrow-precision datatypes, a dedicated DMA engine, on-die SRAM and a specialized NoC fabric for high-bandwidth data movement, increasing token throughput.
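Why memory bandwidth dominates inference: during single-stream decoding, every generated token must stream the model weights from memory, so HBM bandwidth sets a hard ceiling on tokens per second. A rough illustration using the 7 TB/s figure from this post; the model size is hypothetical, and the estimate ignores KV-cache traffic, activations and scheduling overhead:

```python
# Bandwidth-only upper bound on batch-1 decode throughput for a
# memory-bound model. The 7 TB/s HBM figure is from the article;
# the model size below is a hypothetical example.

HBM_BANDWIDTH_TBPS = 7.0  # terabytes/second

def max_tokens_per_sec(params_billion: float, bytes_per_param: float) -> float:
    """Each decoded token must stream every weight once (batch size 1)."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return HBM_BANDWIDTH_TBPS * 1e12 / weight_bytes

# A hypothetical 200B-parameter model stored in FP4 (0.5 bytes/param)
print(f"{max_tokens_per_sec(200, 0.5):.0f} tokens/s")  # 70
```

Narrow datatypes like FP4 help on both axes at once: they halve the bytes streamed per token and double the arithmetic throughput per tensor core.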
Optimized AI systems
At the systems level, Maia 200 introduces a novel, two-tier scale-up network design built on standard Ethernet. A custom transport layer and a tightly integrated NIC unlock high performance, strong reliability and significant cost advantages without relying on proprietary fabrics.
Each accelerator exposes:
- 2.8 TB/s of bidirectional, dedicated scale-up bandwidth
- Predictable, high-performance collective operations across clusters of up to 6,144 accelerators
This architecture delivers scalable performance for dense inference clusters while reducing power usage and overall TCO across Azure’s global fleet.
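To see what the per-accelerator bandwidth figure means for collective operations, here is a bandwidth-only lower bound for a ring all-reduce across the scale-up fabric. It uses the 2.8 TB/s and 6,144-accelerator figures quoted above; the buffer size is a hypothetical example, and real collectives add per-hop latency and protocol overhead on top of this floor:

```python
# Bandwidth-only lower bound for a ring all-reduce, using the
# 2.8 TB/s per-accelerator figure above. Ignores hop latency and
# protocol overhead; the 1 GB buffer is a hypothetical example.

SCALEUP_TBPS = 2.8  # bidirectional per-accelerator bandwidth

def allreduce_lower_bound_us(size_gb: float, n: int) -> float:
    """A ring all-reduce moves ~2*(n-1)/n of the buffer through each link."""
    bytes_per_link = 2 * (n - 1) / n * size_gb * 1e9
    return bytes_per_link / (SCALEUP_TBPS * 1e12) * 1e6

print(f"{allreduce_lower_bound_us(1.0, 6144):.0f} microseconds")
```

Note that for large clusters the per-accelerator traffic approaches a constant (about twice the buffer size), which is why per-link bandwidth, not cluster size, dominates this bound.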
Within each tray, four Maia accelerators are fully connected with direct, non‑switched links, keeping high‑bandwidth communication local for optimal inference efficiency. The same communication protocols are used for intra-rack and inter-rack networking using the Maia AI transport protocol, enabling seamless scaling across nodes, racks and clusters of accelerators with minimal network hops. This unified fabric simplifies programming, improves workload flexibility and reduces stranded capacity while maintaining consistent performance and cost efficiency at cloud scale.
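The tray-local full mesh described above can be counted directly: with four accelerators per tray (the figure in this post), every pair gets its own non-switched link, so intra-tray traffic never crosses a switch. A tiny sketch:

```python
# Link count for a fully connected (non-switched) tray of n
# accelerators: one direct link per pair, so n*(n-1)/2 links total.

from itertools import combinations

TRAY_ACCELERATORS = 4  # per the article
links = list(combinations(range(TRAY_ACCELERATORS), 2))
print(len(links))  # 6 direct links, 0 switch hops within a tray
```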
A cloud-native development approach
A core principle of Microsoft’s silicon development programs is to validate as much of the end-to-end system as possible ahead of final silicon availability.
A sophisticated pre-silicon environment guided the Maia 200 architecture from its earliest stages, modeling the computation and communication patterns of LLMs with high fidelity. This early co-development environment enabled us to optimize silicon, networking and system software as a unified whole, long before first silicon.
We also designed Maia 200 for fast, seamless availability in the datacenter from the beginning, building out early validation of some of the most complex system elements, including the backend network and our second-generation, closed loop, liquid cooling Heat Exchanger Unit. Native integration with the Azure control plane delivers security, telemetry, diagnostics and management capabilities at both the chip and rack levels, maximizing reliability and uptime for production-critical AI workloads.
As a result of these investments, AI models were running on Maia 200 silicon within days of first packaged part arrival. Time from first silicon to first datacenter rack deployment was reduced to less than half that of comparable AI infrastructure programs. And this end-to-end approach, from chip to software to datacenter, translates directly into higher utilization, faster time to production and sustained improvements in performance per dollar and per watt at cloud scale.
Sign up for the Maia SDK preview
The era of large-scale AI is just beginning, and infrastructure will define what’s possible. Our Maia AI accelerator program is designed to be multi-generational. As we deploy Maia 200 across our global infrastructure, we are already designing for future generations and expect each generation will continually set new benchmarks for what’s possible and deliver ever better performance and efficiency for the most important AI workloads.
Today, we’re inviting developers, AI startups and academics to begin exploring early model and workload optimization with the new Maia 200 software development kit (SDK). The SDK includes a Triton Compiler, support for PyTorch, low-level programming in NPL and a Maia simulator and cost calculator to optimize for efficiencies earlier in the code lifecycle. Sign up for the preview here.
Get more photos, video and resources on our Maia 200 site and read more details.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
The post Maia 200: The AI accelerator built for inference appeared first on The Official Microsoft Blog.
Announcing Open to Work: How to Get Ahead in the Age of AI
The work we do, and the way we do it, is always changing. Each of us has a memory of how we once did a task regularly, the tools we used and how both the task and the tools have since changed so much they are nearly unrecognizable. Because we are living and working in the “now,” change feels both personal and fast, so it is always worth remembering that this has happened before, maybe not in just this way or with this speed.
And it is true: AI is rewriting work. How we do our jobs. How roles change. How careers are built. The skills we need. Some of that is exciting. Some of it can feel overwhelming. What we have learned from previous moments like this is that people are open to work, and they don’t just need new tools. They need a new mindset, a clearer understanding of what’s changing and a path forward.
That’s why today we’re announcing Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, by CEO Ryan Roslansky and Chief Economic Opportunity Officer Aneesh Raman. The book explores how AI is reshaping work and what that shift means for the people navigating it every day.
Microsoft and LinkedIn sit at the intersection of how work is done and how careers are built. We share a belief that the future of work will be driven by human creativity and ingenuity, not technology alone. When humans stay at the center, AI amplifies what people do best and creates new economic opportunity. Open to Work is grounded in that belief and focused on what’s happening now, not abstract predictions about the future.
Ryan’s leadership at LinkedIn and as head of engineering for Microsoft 365 Copilot gives him a rare perspective on this moment. He sees how AI is built, how it shows up in everyday work and what it takes to adapt. Aneesh’s role gives him unique insight into how together we can use this moment of change to create economic opportunity for every member of the global workforce.
The book is backed by real data — insights from experts, LinkedIn’s global network, Microsoft customers and the Work Trend Index. The goal isn’t hype. It’s clarity about how work is changing and how people can respond in practical, meaningful ways.
For professionals, Open to Work is about agency — what you delegate to AI, what skills to deepen and how you stay relevant as roles evolve. For leaders, it’s about rethinking how work gets organized and cultivating a Frontier mindset: the conviction that the most important innovations happen at the edges, where uncertainty is highest and the opportunity to shape what comes next is greatest. And for Microsoft and LinkedIn employees, it’s a reminder of the responsibility we share to shape the future of work in a thoughtful, human-centered way.
Open to Work publishes March 31 and is available for pre-order today.
Frank X. Shaw is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management and military affairs.
Top image: Aneesh Raman, left, LinkedIn chief economic opportunity officer, and Ryan Roslansky, LinkedIn CEO. Photo provided by LinkedIn.
The post Announcing Open to Work: How to Get Ahead in the Age of AI appeared first on The Official Microsoft Blog.
Microsoft announces acquisition of Osmos to accelerate autonomous data engineering in Fabric
Today, Microsoft is announcing the acquisition of Osmos, an agentic AI data engineering platform designed to help simplify complex and time-consuming data workflows.
Microsoft + Osmos: Extending Microsoft Fabric with agentic AI for data engineering
Organizations today face a common challenge: data is everywhere, but making it actionable is often manual, slow and expensive. Many teams spend most of their time preparing data instead of analyzing it. Osmos solves this problem by applying agentic AI to turn raw data into analytics and AI-ready assets in OneLake, the unified data lake at the core of Microsoft Fabric.
This acquisition builds on Microsoft Fabric’s goal to enable customers to unify all data and analytics into a single, secure platform. With the acquisition of Osmos, we are taking the next step toward a future where autonomous AI agents work alongside people — helping reduce operational overhead and making it easier for customers to connect, prepare, analyze and share data across the organization.
Looking ahead: Empowering customers to unlock value from data
Today’s announcement reinforces Microsoft’s focus to help every organization unlock more value from their data faster and with greater simplicity. The Osmos team will join Microsoft’s Fabric engineering organization to advance our vision for simpler, more intuitive and AI-ready data experiences.
Stay tuned for updates as we integrate Osmos into Fabric and continue our journey to empower every organization to achieve more with data. To follow updates, visit the Microsoft Fabric Blog.
Bogdan Crivat leads Microsoft’s Azure Data Analytics, building the Fabric engines for big data behind Power BI, and our AI-powered analytics infrastructure.
The post Microsoft announces acquisition of Osmos to accelerate autonomous data engineering in Fabric appeared first on The Official Microsoft Blog.
From idea to deployment: The complete lifecycle of AI on display at Ignite 2025
By now, most people would agree that AI is in the process of fundamentally changing how we work and solve problems. But this technology is still too often thought of as an addition to the work we do, rather than a fundamental part of it.
AI is not something that you can just plop on the end of a finished product, like a cherry on top of a sundae. Instead, using AI responsibly and wisely means thinking through how it can be used most effectively at every layer, from the datacenter that powers AI functionality to the people and organizations that are benefiting from its capabilities.
As we embark on another Microsoft Ignite, our company is empowering the complete lifecycle of AI, creating tools and solutions to drive the next generation of digital transformation for every organization and at every level of the work they do.
We envision a future where organizations become Frontier Firms by using AI for unlocking creativity and innovation, allowing the next great ideas to surface.
These are some of the major themes we are seeing with this year’s Ignite products and features:
AI in the flow of human ambition
At Microsoft, we believe that all great ideas start with human ambition, which can be accessed and unlocked using the capabilities in Microsoft 365 Copilot and an agent ecosystem.
Work IQ amplifies your IQ. It’s the intelligence layer that enables Microsoft 365 Copilot and agents to know how you work, with whom you work and the content you collaborate on. Built on your data, memory and inference, it connects to the rich company knowledge in your emails, files, meetings and chats, plus your preferences, habits, work patterns and relationships. It allows Copilot to make connections, unlock insights and predict the next best action based on native integrations, not a patchwork of third-party connectors. And now, you can tap into the expertise of Work IQ with APIs to build agents tuned to your unique workflows and business needs.
Work IQ also is powering many of the updates across Microsoft 365 Copilot announced at Ignite today.
Ubiquitous innovation and intelligence
In a Frontier Firm, there are makers in every room of the house. People on the frontlines are closest to the work problems that need to be solved. They can create agents to help them in their day-to-day work.
How do AI agents know what to do with your data? Foundry IQ and Fabric IQ help AI agents understand what users are doing, bridge the gap between raw data and real-world business meaning and find the context to make decisions.
Fabric IQ brings together analytical, time series and location-based data with your operational systems under one shared model tied to business meaning. This gives you a live, connected view of your business, so both people and AI can act in real time. If you are a customer who is already using Power BI for your business intelligence reporting, all of that pre-existing data modeling work will act as an immediate accelerant, giving your agents the unique context that defines how your business runs.
Foundry IQ takes this further with a fully managed knowledge system designed to ground AI agents over multiple data sources — including Microsoft 365 (Work IQ), Fabric IQ, custom applications and the web. This single endpoint for knowledge has routing and intelligence built in, enabling higher-quality reasoning, safer actions and more value for builders.
Microsoft Agent Factory is a program that brings these agent IQ layers together to help organizations build agents with confidence. With a single metered plan, customers can start building with IQ using Microsoft Foundry and Copilot Studio. They can deploy their agents anywhere, including Microsoft 365 Copilot, with no upfront licensing and provisioning required. Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers and access tailored role-based training to boost AI fluency across teams.
Observability at every layer
By 2028, businesses are projected to have 1.3 billion AI agents automating workflows.[1] Most organizations don’t yet have a way to observe, secure or govern them — if not governed, AI agents are the new shadow IT.
Microsoft Agent 365 enables you to observe, manage and secure your AI agents, whether the agents are created with Microsoft platforms, open-source frameworks or third-party platforms.
It equips them with many of the same apps and protections as people, tailored to agent needs, saving IT time and effort on integrating agents into business processes. It includes Microsoft security solutions (Defender, Entra, Purview and Foundry Control Plane) to protect and govern agents; productivity tools, including Microsoft 365 apps and Work IQ, to help people work more efficiently; and the Microsoft 365 admin center to manage agents.
This is only a small selection of the many exciting features and updates we will be announcing at Ignite. As a reminder, you can view keynote sessions from Microsoft executives, including Judson Althoff, Scott Guthrie, Charles Lamanna, Asha Sharma and Ryan Roslansky, live or on-demand.
Plus, you can get more on all these announcements by exploring the Book of News, the official compendium of all today’s news.
Frank X. Shaw is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management and military affairs.
Related:
Partners leading the AI transformation: Microsoft Ignite 2025 recap
[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825
The post From idea to deployment: The complete lifecycle of AI on display at Ignite 2025 appeared first on The Official Microsoft Blog.
Microsoft, NVIDIA and Anthropic announce strategic partnerships
Anthropic to scale Claude on Azure
Anthropic to adopt NVIDIA architecture
NVIDIA and Microsoft to invest in Anthropic
Today Microsoft, NVIDIA and Anthropic announced new strategic partnerships. Anthropic is scaling its rapidly growing Claude AI model on Microsoft Azure, powered by NVIDIA, which will broaden access to Claude and provide Azure enterprise customers with expanded model choice and new capabilities. Anthropic has committed to purchase $30 billion of Azure compute capacity and to contract additional compute capacity up to one gigawatt.
For the first time, NVIDIA and Anthropic are establishing a deep technology partnership to support Anthropic’s future growth. Anthropic and NVIDIA will collaborate on design and engineering, with the goal of optimizing Anthropic models for the best possible performance, efficiency, and TCO, and optimizing future NVIDIA architectures for Anthropic workloads. Anthropic’s compute commitment will initially be up to one gigawatt of compute capacity with NVIDIA Grace Blackwell and Vera Rubin systems.
Microsoft and Anthropic are also expanding their existing partnership to provide broader access to Claude for businesses. Customers of Microsoft Foundry will be able to access Anthropic’s frontier Claude models including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. This partnership will make Claude the only frontier model available on all three of the world’s most prominent cloud services. Azure customers will gain expanded choice in models and access to Claude-specific capabilities.
Microsoft has also committed to continuing access for Claude across Microsoft’s Copilot family, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio.
As part of the partnership, NVIDIA and Microsoft are committing to invest up to $10 billion and up to $5 billion respectively in Anthropic.
Anthropic co-founder and CEO Dario Amodei, Microsoft Chairman and CEO Satya Nadella, and NVIDIA founder and CEO Jensen Huang gathered to discuss the new partnerships:
The post Microsoft, NVIDIA and Anthropic announce strategic partnerships appeared first on The Official Microsoft Blog.
Infinite scale: The architecture behind the Azure AI superfactory
Today, we are unveiling the next Fairwater site of Azure AI datacenters in Atlanta, Georgia. This purpose-built datacenter is connected to our first Fairwater site in Wisconsin, prior generations of AI supercomputers and the broader Azure global datacenter footprint to create the world’s first planet-scale AI superfactory. By packing computing power more densely than ever before, each Fairwater site is built to efficiently meet unprecedented demand for AI compute, push the frontiers of model intelligence and empower every person and organization on the planet to achieve more.
To meet this demand, we have reinvented how we design AI datacenters and the systems we run inside of them. Fairwater is a departure from the traditional cloud datacenter model and uses a single flat network that can integrate hundreds of thousands of the latest NVIDIA GB200 and GB300 GPUs into a massive supercomputer. These innovations are a product of decades of experience designing datacenters and networks, as well as learnings from supporting some of the largest AI training jobs on the planet.
While the Fairwater datacenter design is well suited for training the next generation of frontier models, it is also built with fungibility in mind. Training has evolved from a single monolithic job into a range of workloads with different requirements (such as pre-training, fine-tuning, reinforcement learning and synthetic data generation). Microsoft has deployed a dedicated AI WAN backbone to integrate each Fairwater site into a broader elastic system that enables dynamic allocation of diverse AI workloads and maximizes GPU utilization of the combined system.
Below, we walk through some of the exciting technical innovations that support Fairwater, from the way we build datacenters to the networking within and across the sites.
Maximum density of compute
Modern AI infrastructure is increasingly constrained by the laws of physics. The speed of light is now a key bottleneck in our ability to tightly integrate accelerators, compute and storage with performant latency. Fairwater is designed to maximize the density of compute to minimize latency within and across racks and maximize system performance.
One of the key levers for driving density is improving cooling at scale. AI servers in the Fairwater datacenters are connected to a facility-wide cooling system designed for longevity, with a closed-loop approach that reuses the liquid continuously after the initial fill with no evaporation. The water used in the initial fill is equivalent to what 20 homes consume in a year and is only replaced if water chemistry indicates it is needed (it is designed for 6-plus years), making it extremely efficient and sustainable.
Liquid-based cooling also provides much higher heat transfer, enabling us to maximize rack and row-level power (~140kW per rack, 1,360 kW per row) to pack compute as densely as possible inside the datacenter. State-of-the-art cooling also helps us maximize utilization of this dense compute in steady-state operations, enabling large training jobs to run performantly at high scale. After cycling through a system of cold plate paths across the GPU fleet, heat is dissipated by one of the largest chiller plants on the planet.

Another way we are driving compute density is with a two-story datacenter building design. Many AI workloads are very sensitive to latency, which means cable run lengths can meaningfully impact cluster performance. Every GPU in Fairwater is connected to every other GPU, so the two-story datacenter building approach allows for placement of racks in three dimensions to minimize cable lengths, which in turn improves latency, bandwidth, reliability and cost.
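The physics behind this design choice is easy to quantify: signals in optical fiber propagate at roughly two-thirds the speed of light, so every metre of cable adds about 5 nanoseconds of one-way latency. A small illustration (the run lengths below are hypothetical, not Fairwater measurements):

```python
# Why cable length matters for tightly coupled GPUs: propagation
# delay in fiber is ~5 ns per metre each way. Run lengths below
# are hypothetical, for illustration only.

C = 299_792_458        # speed of light in vacuum, m/s
V_FIBER = C * 2 / 3    # typical propagation speed in optical fiber

def one_way_ns(metres: float) -> float:
    return metres / V_FIBER * 1e9

for run in (10, 50, 200):  # hypothetical rack-to-rack run lengths
    print(f"{run:>4} m -> {one_way_ns(run):.0f} ns one way")
```

Shaving tens of metres off worst-case runs by stacking racks in three dimensions therefore removes hundreds of nanoseconds per round trip, which compounds across the many network hops of a large collective operation.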

High-availability, low-cost power
We are pushing the envelope in serving this compute with cost-efficient, reliable power. The Atlanta site was selected with resilient utility power in mind and is capable of achieving 4×9 (99.99%) availability at 3×9 cost. By securing highly available grid power, we can also forgo traditional resiliency approaches for the GPU fleet (such as on-site generation, UPS systems and dual-corded distribution), driving cost savings for customers and faster time-to-market for Microsoft.
We have also worked with our industry partners to codevelop power-management solutions to mitigate power oscillations created by large scale jobs, a growing challenge in maintaining grid stability as AI demand scales. This includes a software-driven solution that introduces supplementary workloads during periods of reduced activity, a hardware-driven solution where the GPUs enforce their own power thresholds and an on-site energy storage solution to further mask power fluctuations without utilizing excess power.
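The software-driven piece of this can be sketched as a simple floor-following controller: when the training job's draw dips (for example during optimizer steps), supplementary work is injected so the grid sees a flatter profile. The threshold and power trace below are hypothetical, not Microsoft's actual control logic:

```python
# Minimal sketch of software-driven power smoothing: top up the
# site's draw with supplementary workloads whenever it falls below
# a floor. Floor value and trace are hypothetical illustrations.

POWER_FLOOR_KW = 100.0  # hypothetical minimum draw presented to the grid

def smoothed(draw_kw: float) -> float:
    """Return total draw after adding filler work to reach the floor."""
    filler = max(0.0, POWER_FLOOR_KW - draw_kw)
    return draw_kw + filler

trace = [140, 135, 20, 15, 130]      # kW: dips between compute bursts
print([smoothed(p) for p in trace])  # dips are filled up to the floor
```

The hardware-enforced thresholds and on-site storage mentioned above play the complementary role: capping spikes from the top and masking whatever residual fluctuation the software layer cannot absorb.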
Cutting-edge accelerators and networking systems
Fairwater’s world-class datacenter design is powered by purpose-built servers, cutting-edge AI accelerators and novel networking systems. Each Fairwater datacenter runs a single, coherent cluster of interconnected NVIDIA Blackwell GPUs, with an advanced network architecture that can scale reliably beyond traditional Clos network limits with current-gen switches (hundreds of thousands of GPUs on a single flat network). This required innovation across scale-up networking, scale-out networking and networking protocol.
In terms of scale-up, each rack of AI accelerators houses up to 72 NVIDIA Blackwell GPUs, connected via NVLink for ultra-low-latency communication within the rack. Blackwell accelerators provide the highest compute density available today, with support for low-precision number formats like FP4 to increase total FLOPS and enable efficient memory use. Each rack provides 1.8 TB of GPU-to-GPU bandwidth, with over 14 TB of pooled memory available to each GPU.
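A quick check of what those rack-level numbers imply per GPU, using only the figures quoted above (72 GPUs, ~14 TB pooled memory, 1.8 TB/s GPU-to-GPU bandwidth); the 10 GB transfer is a hypothetical example:

```python
# Rack-level arithmetic from the figures above. Illustrative only.

GPUS_PER_RACK = 72
POOLED_MEMORY_TB = 14
GPU_TO_GPU_TBPS = 1.8

# Each GPU's share of the rack's pooled memory
per_gpu_contribution_gb = POOLED_MEMORY_TB * 1000 / GPUS_PER_RACK
print(f"~{per_gpu_contribution_gb:.0f} GB contributed per GPU")  # ~194

# Time for one GPU to pull a hypothetical 10 GB shard from a peer
print(f"{10 / (GPU_TO_GPU_TBPS * 1000) * 1e3:.1f} ms")  # ~5.6 ms
```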

These racks then use scale-out networking to create pods and clusters that enable all GPUs to function as a single supercomputer with minimal hop counts. We achieve this with a two-tier, Ethernet-based backend network that supports massive cluster sizes with 800 Gbps GPU-to-GPU connectivity. Relying on the broad Ethernet ecosystem and SONiC (Software for Open Networking in the Cloud), our own operating system for network switches, also helps us avoid vendor lock-in and manage cost, as we can use commodity hardware instead of proprietary solutions.
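For context on the ceiling being pushed past here: in a textbook two-tier leaf-spine (Clos) fabric built from radix-R switches, half of each leaf's ports face hosts and half face spines, capping the fabric at roughly R²/2 hosts. That is why reaching hundreds of thousands of GPUs on a flat network requires work beyond standard topologies. A sketch of the classic limit (radix values are illustrative, not Fairwater's switch specs):

```python
# Classic two-tier leaf-spine ceiling: with radix-R switches, each
# leaf splits ports evenly between hosts and spines, and each spine
# can reach at most R leaves, giving ~R^2/2 hosts total.
# Radix values below are illustrative.

def max_hosts_two_tier(radix: int) -> int:
    leaves = radix               # a spine has `radix` ports, one per leaf
    hosts_per_leaf = radix // 2  # half the leaf's ports face hosts
    return leaves * hosts_per_leaf

for r in (64, 128, 256):
    print(f"radix {r:>3}: {max_hosts_two_tier(r):,} hosts")
```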
Improvements across packet trimming, packet spray and high-frequency telemetry are core components of our optimized AI network. We are also working to enable deeper control and optimization of network routes. Together, these technologies deliver advanced congestion control, rapid detection and retransmission and agile load balancing, ensuring ultra-reliable, low-latency performance for modern AI workloads.
Planet scale
Even with these innovations, compute demands for large training jobs (now measured in trillions of parameters) are quickly outpacing the power and space constraints of a single facility. To serve these needs, we have built a dedicated AI WAN optical network to extend Fairwater’s scale-up and scale-out networks. Leveraging our scale and decades of hyperscale expertise, we delivered over 120,000 new fiber miles across the US last year — expanding AI network reach and reliability nationwide.
With this high-performance, high-resiliency backbone, we can directly connect different generations of supercomputers into an AI superfactory that exceeds the capabilities of a single site across geographically diverse locations. This empowers AI developers to tap our broader network of Azure AI datacenters, segmenting traffic based on their needs across scale-up and scale-out networks within a site, as well as across sites via the continent-spanning AI WAN.
This is a meaningful departure from the past, where all traffic had to ride the scale-out network regardless of the requirements of the workload. Not only does it provide customers with fit-for-purpose networking at a more granular level, it also helps create fungibility to maximize the flexibility and utilization of our infrastructure.
Putting it all together
The new Fairwater site in Atlanta represents the next leap in the Azure AI infrastructure and reflects our experience running the largest AI training jobs on the planet. It combines breakthrough innovations in compute density, sustainability and networking systems to efficiently serve the massive demand for computational power we are seeing. It also integrates deeply with other AI datacenters and the broader Azure platform to form the world’s first AI superfactory. Together, these innovations provide a flexible, fit-for-purpose infrastructure that can serve the full spectrum of modern AI workloads and empower every person and organization on the planet to achieve more. For our customers, this means easier integration of AI into every workflow and the ability to create innovative AI solutions that were previously unattainable.
Find out more about how Microsoft Azure can help you integrate AI to streamline and strengthen development lifecycles here.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
The post Infinite scale: The architecture behind the Azure AI superfactory appeared first on The Official Microsoft Blog.
Bridging the AI divide: How Frontier firms are transforming business
Across every industry, leaders are asking: How can AI be used to fundamentally transform our business? At the forefront are Frontier firms — empowering human ambition and finding AI-first differentiation in everything to maximize their potential and impact on society. These firms are redefining what’s possible and setting the pace for the future.
To better understand this transformation, Microsoft commissioned a global study with the International Data Corporation (IDC) of more than 4,000 business leaders responsible for AI decisions. The findings reveal that 68% of these companies are using AI today, but the real difference lies in how they’re using it. Frontier firms, the ones leading in AI Transformation, report returns three times higher than those of slow adopters.
What sets Frontier firms apart
Their success goes beyond efficiency and productivity at scale, driving growth, expansion and industry leadership in a new AI-powered economy. Based on the IDC study, Microsoft has identified five key lessons learned in becoming a Frontier firm and how organizations can transform their business with AI.
#1 EXPANDING AI IMPACT ACROSS EVERY BUSINESS FUNCTION
On average, Frontier firms are using AI across seven business functions. Over 70% are using AI in customer service, marketing, IT, product development and cybersecurity. These functions benefit from AI’s ability to automate workflows, generate content and detect anomalies in real time. This broad adoption is translating into measurable business impact: Frontier firms report better outcomes at a rate that is 4X greater than slow adopters across brand differentiation (87%), cost efficiency (86%), top-line growth (88%) and customer experience (85%).
BlackRock is transforming its investment lifecycle with Microsoft AI integrated into its Aladdin platform. Embedded across 20 apps and used by tens of thousands of users, AI tools help client relationship managers save hours per client by generating personalized briefs and opportunity analyses, while portfolio managers access real-time analytics and research summaries through Aladdin Copilot. The result is faster insights, improved data quality and enhanced risk management, helping BlackRock and its clients gain an advantage while enhancing client service, compliance and portfolio management.
#2: UNLOCKING INDUSTRY-SPECIFIC VALUE
While many organizations start their AI journey with personal productivity gains like automating tasks and improving efficiency, Frontier firms are moving further, deploying AI for strategic, industry-specific applications. According to the study, 67% are monetizing industry-specific AI use cases to boost revenue.
Industries at the forefront of this transformation include financial services, healthcare and manufacturing. Each is finding powerful, practical ways to apply AI to its most complex challenges. In financial services, organizations are strengthening fraud detection, accelerating transaction reconciliation and elevating customer support. In healthcare, it is helping clinicians generate accurate documentation, assist in diagnostics and deliver more personalized care. In manufacturing, AI is driving predictive maintenance, optimizing production schedules and automating quality inspections.
Mercedes-Benz is scaling AI across its global production network to advance automotive innovation, stabilize supply chain volatility, simplify production complexity and meet sustainability demands. Its MO360 data platform connects more than 30 car plants worldwide to the Microsoft Cloud for real-time data access, global optimization and analytics. The Digital Factory Chatbot Ecosystem uses a multi-agent system to empower employees with collaborative insights. Paint Shop AI leverages machine learning simulations to diagnose efficiency declines and reduce energy consumption of the buildings and machines — including 20% energy savings in the Rastatt paint shop — and NVIDIA Omniverse on Azure powers digital twins for agile planning and continuous improvement.
#3: BUILDING CUSTOM AI SOLUTIONS FOR COMPETITIVE ADVANTAGE
Today, 58% of Frontier firms are using custom AI solutions. Custom AI solutions allow businesses to embed proprietary knowledge, tone and compliance into every interaction. They can be fine-tuned on proprietary data or industry-specific knowledge, enabling higher accuracy in predictions or content generation and better alignment with business goals and compliance needs.
Within the next 24 months, 77% of Frontier firms plan to use custom AI solutions. This reflects a growing trend that AI leaders are layering in deeper strategic integrations of AI across their business.
As customers seek to use AI more to shop and search for products, luxury lifestyle company Ralph Lauren developed a personal, frictionless, inspirational and accessible solution to blend fashion with cutting-edge AI. Working with Microsoft, Ralph Lauren developed Ask Ralph: an AI-powered conversational tool providing styling tips and outfit recommendations from across the Polo Ralph Lauren brand. Powered by Azure OpenAI, the AI tool uses a natural language search engine to adapt dynamically to specific language inputs and interpret user intent to improve accuracy. It supports complex queries with exploratory or nuanced information needs through contextual understanding, and it can discern tone, satisfaction and intent to refine recommendations. The tool also picks up on cues like location-based insights or event-driven needs. With Ask Ralph, customers can now reimagine how they shop online by putting the brand’s unique and iconic take on style right into their own hands.
#4: AGENTIC AI: THE NEW DIFFERENTIATOR FOR BUSINESS LEADERS
Agentic AI — systems that can reason, plan and act with human guidance — is fast becoming the next defining capability of Frontier organizations. In the next two years, IDC estimates the number of companies using agentic AI will triple.
Leaders today face a familiar challenge — teams are operating at full capacity, yet the demand for innovation and impact continues to grow. That’s where AI agents come in. In finance, they can surface real-time insights, provide policy guidance, review deal documents and assist in sourcing suppliers. In sales, agents are becoming always-on teammates — building pipelines, unifying insights across CRM systems, meetings, emails and the web and helping sellers qualify leads and draft personalized outreach. In customer service, AI agents can manage cases, maintain knowledge accuracy and interpret customer intent.
Dow is using agents to automate the shipping invoice analysis process and streamline its global supply chain to unlock new efficiencies and value. Receiving more than 100,000 shipping invoices via PDF each year, Dow built an autonomous agent in Copilot Studio to scan for billing inaccuracies and surface them in a dashboard for employee review. Using Freight Agent — a second agent built in Copilot Studio — employees can investigate further by “dialoguing with the data” in natural language. The agents are helping employees solve the challenge of hidden losses autonomously within minutes rather than weeks or months. Dow expects to save millions of dollars on shipping costs through increased accuracy in logistic rates and billing within the first year.
#5: AI BUDGETS ARE GROWING AND SO IS THE TEAM BEHIND THEM
71% of respondents plan to increase their AI budgets, with funding coming from IT and non-IT sources. These investments are no longer confined to the IT department or the Chief Digital Officer’s office.
To truly unlock AI’s transformational potential, it requires everyone collaborating across functions to drive innovation, adoption and impact: 34% of respondents are adding net new investment, 24% are repurposing existing IT budgets and 13% are reallocating funds from non-IT areas such as operations, HR or marketing. This diversified funding strategy signals that AI is no longer viewed as a niche technology — it’s becoming a core enabler of enterprise-wide transformation.
“IDC projects the global economic impact of AI will reach $22.3 trillion by 2030 (3.7% of global GDP in 2030). Estimating the return on AI investments requires both strong measurement capabilities and a robust business case — one that models both cost implications and the potential for responsible value creation,” said David Schubmehl, Vice President, AI and Automation, IDC.
The AI imperative: Act now to lead the future
The opportunity to demand more from AI is now. Among organizations surveyed, 22% are Frontier firms, realizing measurable impact and moving with speed, while 39% risk falling behind. Many are navigating challenges around security, privacy, governance and cost, as well as ethical considerations, integration complexity and scaling from pilot to production.
The message is clear: those who embrace AI benefit from momentum in efficiency, customer experience and innovation. To stay competitive, leaders should act now and embrace AI not as an experiment but as a strategic imperative for growth.
Closing the gap: Start your transformation today
Success starts with investment, governance and organizational readiness. Having a robust infrastructure that is secure, reliable and scalable to support AI initiatives is critical. The emergence of Frontier firms shows that customized AI deployment and responsible oversight can drive ROI and innovation.
Explore how Microsoft’s AI solutions can transform your organization. Leverage our resources to innovate with AI and start your journey to becoming a Frontier firm.
Alysa Taylor is the Chief Marketing Officer for Commercial Cloud and AI at Microsoft, leading teams that enable digital and AI transformation for organizations of all sizes across the globe. She is at the forefront of helping organizations around the world harness digital and AI innovation to transform how they operate and grow.
NOTE
IDC InfoBrief, sponsored by Microsoft, What Every Company Can Learn From Frontier Firms Leading the AI Revolution, IDC #US53838325, November 2025
The post Bridging the AI divide: How Frontier firms are transforming business appeared first on The Official Microsoft Blog.
Beware of double agents: How AI can fortify — or fracture — your cybersecurity
AI is rapidly becoming the backbone of our world, promising unprecedented productivity and innovation. But as organizations deploy AI agents to unlock new opportunities and drive growth, they also face a new breed of cybersecurity threats.
There are a lot of Star Trek fans here at Microsoft, including me. One of our engineering leaders gifted me a life-size cardboard standee of Data that lurks next to my office door. So, as I look at that cutout, I think about the Great AI Security Dilemma: Is AI going to be our best friend or our worst nightmare? Drawing inspiration from the duality of the android officer Data and his evil twin Lore in the Star Trek universe, today’s AI agents can either fortify your cybersecurity defenses or, if mismanaged, fracture them.
The influx of agents is real. IDC research[1] predicts there will be 1.3 billion agents in circulation by 2028. When we think about our agentic future in AI, the duality of Data and Lore seems like a great way to think about what we’ll face with AI agents and how to avoid double agents that upend control and trust. Leaders should consider three principles and tailor them to fit the specific needs of their organizations.
1. Recognize the new attack landscape
Security is not just an IT issue — it’s a board-level priority. Unlike traditional software, AI agents are more dynamic, more adaptive and more likely to operate autonomously. This creates unique risks.
We must accept that AI can be abused in ways beyond what we’ve experienced with traditional software. We employ AI agents to perform well-meaning tasks, but those with broad privileges can be manipulated by bad actors to misuse their access, such as leaking sensitive data via automated actions. We call this the “Confused Deputy” problem. AI agents “think” in terms of natural language, where instructions and data are tightly intertwined, much more than in typical software we interact with. The generative models agents depend on dynamically analyze the entire soup of human (or even non-human) languages, making it hard to distinguish well-known safe operations from new instructions introduced through malicious manipulation. The risk grows even more when shadow agents — unapproved or orphaned — enter the picture. And as we saw in Bring Your Own Device (BYOD) and other tech waves, anything you cannot inventory and account for magnifies blind spots and drives risk ever upward.
2. Practice Agentic Zero Trust
AI agents may be new as productivity drivers, but they can still be managed effectively using established security principles. I’ve had great conversations about this here at Microsoft with leaders like Mustafa Suleyman, cofounder of DeepMind and now Executive Vice President and CEO of Microsoft AI. Mustafa frequently shares a way to think about this, which he outlined in his book The Coming Wave, in terms of Containment and Alignment.
Containment simply means we do not blindly trust our AI Agents, and we significantly box every aspect of what they do. For example, we cannot let any agent’s access privileges exceed its role and purpose — it’s the same security approach we take to employee accounts, software and devices, what we refer to as “least privilege.” Similarly, we contain by never implicitly trusting what an agent does or how it communicates — everything must be monitored — and when this isn’t possible, agents simply are not permitted to operate in our environment.
Alignment is all about infusing positive control of an AI agent’s intended purpose, through its prompts and the models it uses. We must only use AI agents trained to resist attempts at corruption, with standard and mission-specific safety protections built into both the model itself and the prompts used to invoke the model. AI agents must resist attempts to divert them from their approved uses. They must execute in a Containment environment that watches closely for deviation from their intended purpose. All this requires strong AI agent identity and clear accountable ownership within the organization. As part of AI governance, every agent must have an identity, and we must know who in the organization is accountable for its aligned behavior.
Containment (least privilege) and Alignment will sound familiar to enterprise security teams, because they reflect basic principles of Zero Trust. Agentic Zero Trust includes “assuming breach,” or never implicitly trusting anything, making humans, devices and agents verify who they are explicitly before they gain access and limiting their access to only what’s needed to perform a task. While Agentic Zero Trust ultimately includes deeper security capabilities, Containment and Alignment are a good shorthand in security-in-AI strategy conversations with senior stakeholders to keep everyone grounded in managing the new risk. Agents will keep joining and adapting at work — some may become double agents. With proper controls, we can protect ourselves.
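As a deliberately simplified sketch of the Containment principle, consider what least-privilege agent identity might look like in code. This is an illustration using hypothetical names, not a Microsoft product API: every agent carries an identity, an accountable owner and a deny-by-default action scope, and every authorization decision is logged.

```python
# Minimal sketch of agent Containment: identity, accountable owner,
# least-privilege scope, deny-by-default checks, full audit trail.
# Names and structure are illustrative assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                       # accountable human or team
    allowed_actions: frozenset       # least-privilege scope
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        allowed = action in self.allowed_actions  # deny by default
        self.audit_log.append((self.agent_id, action, allowed))
        return allowed

# Hypothetical invoice-review agent with a narrowly scoped role.
invoice_bot = AgentIdentity(
    agent_id="agent-0042",
    owner="finance-ops@example.com",
    allowed_actions=frozenset({"read_invoice", "flag_anomaly"}),
)
assert invoice_bot.authorize("read_invoice")      # within scope
assert not invoice_bot.authorize("send_payment")  # contained
```

The point of the sketch is the shape of the control, not the code: scope exceeding role is refused, and every decision leaves an auditable record tied to an accountable owner.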
3. Foster a culture of secure innovation
Technology alone won’t solve AI security. Culture is the real superpower in managing cyber risk — and leaders have the unique ability to shape it. Start with open dialogue: make AI risks and responsible use part of everyday conversations. Keep it cross-functional: legal, compliance, HR and others should have a seat at the table. Invest in continuous education: train teams on AI security fundamentals and clarify policies to cut through noise. Finally, embrace safe experimentation: give people approved spaces to learn and innovate without creating risk.
Organizations that thrive will treat AI as a teammate, not a threat — building trust through communication, learning and continuous improvement.
The path forward: What every company should do
AI isn’t just another chapter — it’s a plot twist that changes everything. The opportunities are huge, but so are the risks. The rise of AI requires ambient security, which executives create by making cybersecurity a daily priority. This means blending robust technical measures with ongoing education and clear leadership so that security awareness influences every choice made. Organizations maintain ambient security when they:
- Make AI security a strategic priority.
- Insist on Containment and Alignment for every agent.
- Mandate identity, ownership and data governance.
- Build a culture that champions secure innovation.
And it will be important to take a set of practical steps:
- Assign every AI agent an ID and owner — just like employees need badges. This ensures traceability and control.
- Document each agent’s intent and scope.
- Monitor actions, inputs and outputs. Map data flows early to set compliance benchmarks.
- Keep agents in secure, sanctioned environments — no rogue “agent factories.”
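To illustrate the inventory idea behind these steps, here is a toy Python sketch (an assumed design, not a real product): agents observed in the environment are checked against a registry of sanctioned agents, and anything unregistered is flagged as a potential shadow agent.

```python
# Toy shadow-agent detection: compare observed agent IDs against a
# sanctioned registry. Registry schema and IDs are hypothetical.
sanctioned = {
    "agent-0042": {"owner": "finance-ops", "scope": "invoices"},
    "agent-0077": {"owner": "it-sec", "scope": "log-triage"},
}
observed = ["agent-0042", "agent-9999", "agent-0077"]

shadow = [a for a in observed if a not in sanctioned]
print(shadow)  # any unregistered agent is flagged for investigation
```

Anything that surfaces in the shadow list has no recorded owner, intent or scope, which is exactly the blind spot the practical steps above are meant to eliminate.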
The call to action for every business is: Review your AI governance framework now. Demand clarity, accountability and continuous improvement. The future of cybersecurity is human plus machine — lead with purpose and make AI your strongest ally.
At Microsoft, we know we have a huge role to play in empowering our customers in this new era. In May, we introduced Microsoft Entra Agent ID as a way to help customers assign unique identities to agents from the moment they are created in Microsoft Copilot Studio and Azure AI Foundry. We leverage AI in Defender and Security Copilot, combined with the massive security signals we collect, to expose and defeat phishing campaigns and other attacks that cybercriminals may use as entry points to compromise AI agents. We’ve also been committed to a platform approach with AI agents, to help customers safely use both Microsoft and third-party agents on their journey, avoiding the complexity and risk that come from needing to juggle excessive dashboards and management consoles.
I’m excited by several other innovations we will be sharing at Microsoft Ignite later this month, alongside customers and partners.
We may not be conversing with Data on the bridge of the USS Enterprise quite yet, but as a technologist, I have never found anything more exciting than watching this stage of AI’s trajectory unfold in our workplaces and lives. As leaders, understanding the core opportunities and risks helps create a safer world for humans and agents working together.
Notes
[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825
The post Beware of double agents: How AI can fortify — or fracture — your cybersecurity appeared first on The Official Microsoft Blog.
Becoming Frontier: How human ambition and AI-first differentiation are helping Microsoft customers go further with AI
Over the past few years, we have driven remarkable progress accelerating AI innovation together with our customers and partners. We are achieving efficiency and productivity at scale to shape industries and markets around the world. It is time to demand more of AI to solve humanity’s biggest challenges by democratizing intelligence, obsolescing the mundane and unlocking creativity. This is the notion of becoming Frontier: to empower human ambition and find AI-first differentiation in everything we do to maximize an organization’s potential and our impact on society.
Microsoft’s technology portfolio ensures our customers can go further with AI on their way to becoming Frontier firms, using our AI Transformation success framework as their guide. Our AI business solutions are dramatically changing how people gain actionable insights from data — fusing the capabilities of AI agents and Copilots while keeping humans at the center. We have the largest, most scalable, most capable cloud and AI platform in the industry for our customers to build upon their aspirations. We remain deeply focused on ensuring AI is used responsibly and securely, and embed security into everything we do to help our customers prioritize cybersecurity and guard against threats.
We are fortunate to work with thousands of customers and partners around the world — across every geography and industry. I am pleased to share some of the customer stories being showcased at our recently opened Experience Center One facility — each exemplifying the path to becoming Frontier.
Driven by a commitment to innovation, sustainability and operational excellence, ADNOC is helping meet the world’s growing energy demands safely and reliably, while accelerating decarbonization efforts. To empower its workforce, the company introduced OneTalent — a unified AI-powered platform consolidating over 16 legacy HR processes into a single, intelligent system that furthers its dedication to nurturing talent, aligning people with strategic goals and turning every member of its workforce into an AI collaborator. Partnering with Microsoft and AIQ, ADNOC applied AI across its operations to reimagine everything from seismic analysis to predictive maintenance. ENERGYai and Neuron 5 — AI-powered platforms built natively on Azure OpenAI — turn complexity into actionable insights. The platforms use predictive models to reduce downtime — by as much as 50% at one plant. They are also using autonomous agents to optimize energy use; unlocking data-driven insights that have accelerated energy workflows from months or years to just days or minutes.
Asset manager and technology provider BlackRock has been on a journey to infuse AI to level up how its organization operates across three key pillars: how they invest, how they operate and how they serve clients. To accelerate this mission, they partnered with Microsoft to transform processes across the investment management lifecycle by integrating cloud and AI technologies alongside its Aladdin platform. Embedded across 20 applications and accessed by tens of thousands of users, the Aladdin platform’s AI capabilities deliver functionally relevant tools to help redefine workflows for different types of financial service professionals. Client relationship managers are saving hours per client, reducing duplication and improving accuracy by evaluating CRM and market data to generate personalized client briefs and opportunity analyses using natural language processing — supported by verification and review methods that facilitate accuracy and compliance. Investment compliance officers are streamlining portfolio onboarding and compliance guideline coding, saving time on more straightforward tasks to focus on complex, investigative tasks. Portfolio managers can access data, analytics, research summaries, cash balances and more through AI-powered chat capabilities; enabling faster, more informed decision-making aligned with client mandates. With accelerated insights, improved data quality and enhanced risk management, BlackRock and its clients gain an advantage while enhancing client service, compliance and portfolio management.
To build on its culture of innovation and enable hyper-relevant messaging at scale, multinational advertising and media agency dentsu built a cutting-edge solution using Azure OpenAI: dentsu.Connect — a unified OS for its applications. By leveraging the power of AI across the entire campaign lifecycle, clients can build and execute campaigns while predicting marketers’ next best impact with confidence and precision. This end-to-end platform drives data connectivity and ensures seamless interoperability with clients’ technology and data stacks to maximize and drive brand relevance across content, production and media activation while aligning every action with business goals. dentsu.Connect helps minimize the gap between insights and action with speed and precision. Since launching, users have increased operational efficiency by 25%, improved business outcomes by 30% and accelerated decision-making and data-driven AI insight generation by 125X.
Water management solutions and services partner Ecolab is harnessing the power of data-driven solutions to enable organizations to reduce water consumption, maximize system performance and optimize operating costs. Using Microsoft Azure and IoT services, the company built ECOLAB3D: an intelligent cloud platform that unifies diverse and dispersed IoT data to visualize and optimize water systems remotely. By providing actionable insights for real-time optimization across multiple assets and sites, Ecolab partners with global leaders such as Microsoft to collectively drive hundreds of millions in operational savings — while conserving more than 226 billion gallons of water annually; equivalent to the drinking water needs of nearly 800 million people. Delivering solutions across diverse industries, Ecolab is also a trusted partner for foodservice locations, helping balance labor costs with customer satisfaction. Its cloud-based platform Ecolab RushReady transforms data into an AI-enabled dashboard that improves daily operations by delivering actionable insights. In an Ecolab customer case study, this helped improve speed of service and sales labor per hour, resulting in increased profit of more than 10%. From data centers to dining rooms, Ecolab delivers intelligent, scalable solutions that transform operations for greater efficiency and measurable impact.
Leveraging Microsoft’s AI solutions across its portfolio, Epic built agentic “personas” to support care teams and patients, improve operations and financial performance and advance the practice of medicine. By summarizing patient records and automatically drafting clinical notes, one organization found that “Art” decreased after-hours documentation for clinicians by 60%, reduced burnout by 82% and helped them focus more on patient care. Care teams can also track long-term patient health and better plan treatment for chronic conditions, while nurses can perform wound image analysis automatically with 72% greater precision than manual methods. At one hospital, AI review of routine chest X-rays led to earlier discovery of over 100 cases of lung cancer, increasing the detection rate to 70% compared to the 27% national average. To support back-end operations, organizations are using “Penny” to improve the revenue cycle — resulting in $3.4 million in additional revenue at one regional network services provider. Epic also developed “Emmie” to have conversational interactions with patients and more easily help them schedule appointments and ask questions. Epic is leveraging Azure Fabric for the Cosmos platform to bring together anonymized data from more than 300 million patients, including 13 million with rare diseases, so physicians can connect with peers who have treated similar cases to improve rare disease diagnosis and select the most effective treatment.
To reduce professional burnout and accelerate scale across the industry, Harvey built an AI platform to automate legal research, contract reviews and document analysis. Harvey Assistant assists attorney searches across large document sets to identify specific clauses or provisions within seconds instead of hours. To support large-scale analysis, Harvey Vault manages and analyzes up to 100,000 files per project for complex tasks like litigation, while Harvey Workflows automates routine yet critical tasks into smaller AI-managed steps. With the integration of the newly expanded Microsoft Word add-in, AI capabilities provide legal teams with the ability to edit 100-plus page documents with a single query, enabling centrally controlled document compliance reviews that enhance efficiency while reducing risk. With more than 74,000 legal professionals using the platform, Harvey is helping them streamline workflows, reduce administrative burden and combat attorney fatigue — with the average user saving up to 25 hours of time per month.
To revolutionize drug discovery, biotech company Insilico Medicine is leveraging AI across its entire development pipeline — from target identification to molecule design and clinical trials. The company created Pharma.AI to accelerate research while reducing costs and improving success rates for novel therapies — with developmental candidate timelines reduced from 2.5-4.5 years to 9-18 months for more than 20 therapeutic programs. The integrated AI platforms built with Azure AI Foundry manage complex biological data, identify disease-relevant targets and advance candidates to clinical trials — accelerating research in what is traditionally a slow, costly and complex pharmaceutical R&D process. They enable researchers to analyze genetic data and identify drug targets with AI-generated reports to facilitate business case development; use physics-based models to evaluate candidates for potency, safety and synthesizability; integrate with specialized large language models for drug discovery; and combine AI agents with structured workflows to reduce document drafting time by over 85% while improving first-pass quality of scientific documents by 60%.
To enhance manufacturing operations in a fast-paced and complex industry, global consumer foods producer Kraft Heinz partnered with Microsoft to embed AI and machine learning across its production facilities, resulting in smarter decision-making and operational improvements. The company built an AI-powered platform — Plant Chat — providing real-time insights on the factory floor and reducing downtime to enable faster, more confident decision-making with proactive guidance. The solution analyzes over 300 variables and allows operators to interact via natural language to improve consistency, reduce guesswork, decrease waste and maintain compliance — even for less experienced operators. Since implementation, these efforts, together with other initiatives, have resulted in a 40% reduction in supply-chain waste, a 20% increase in sales forecast accuracy and a 6% product-yield improvement across all North American manufacturing sites through the third quarter of 2024. Combined with further operational improvements, this work has yielded more than $1.1 billion in gross efficiencies from 2023 through the third quarter of 2024.
To redefine work and scale intelligent automation globally, digital native Manus AI developed an advanced autonomous AI system designed to understand user intent and execute complex workflows independently across various domains. The solution leverages a multi-agent architecture through Microsoft Azure AI Foundry to deliver scalable, versatile task automation for millions of users worldwide. Its Wide Research capability deploys specialized sub-agents to rapidly perform large-scale, multi-dimensional research tasks; saving significant time and delivering actionable insights to make complex analysis accessible and efficient for strategic decision-making. Manus AI can also build dynamic dashboards so organizations can visualize trends, anomalies and market insights in real time; driving strategic planning with reliable, up-to-date information. The multimodal image editing and creation capabilities also allow users to support brand consistency and enable marketers and product teams to iterate rapidly.
To advance automotive innovation, reduce supply chain volatility, simplify production complexity and meet sustainability demands, Mercedes-Benz scaled AI innovation across its global production network. The MO360 data platform connects over 30 car plants worldwide to the Microsoft Cloud, enabling real-time data access, global optimization and analytics. The Digital Factory Chatbot Ecosystem uses a multi-agent system to empower employees with collaborative insights, and Paint Shop AI leverages machine learning simulations to diagnose efficiency declines and reduce the energy consumption of buildings and machines — including 20% energy savings in the Rastatt paint shop. Using NVIDIA Omniverse on Azure, Mercedes-Benz created large-scale factory digital twins for visualization, testing and optimization of production lines — enabling agile planning and continuous improvement. The MBUX Virtual Assistant, embedded in over 3 million vehicles and powered by ChatGPT through the Azure OpenAI Service and Bing Search, offers natural, conversational voice interactions and integrates Microsoft 365 Copilot with Teams directly into vehicles to enable mobile workspaces.
U.S. stock exchange and financial services technology company Nasdaq integrated AI capabilities into its Nasdaq Boardvantage platform to help corporate governance teams and board members save time, reduce information overload, improve decision-making and enhance board meeting preparation and governance workflows. The board management platform is used by leadership teams at over 4,000 organizations worldwide to centralize activities like meeting planning, agenda building, decision support, resolution approval, voting and signatures. Using Azure OpenAI GPT-4o mini, the AI Summarization feature helps board secretaries significantly reduce manual effort, saving hundreds of hours annually with accuracy between 91% and 97%. AI Meeting Minutes helps governance teams draft minutes by processing agendas, documents and notes while allowing for customization of length, tone and anonymization; accelerating post-meeting workflows and saving up to five hours per meeting.
As customers increasingly use AI to shop and search for products, luxury lifestyle company Ralph Lauren developed a personal, frictionless, inspirational and accessible solution to blend fashion with cutting-edge AI. Working with Microsoft, Ralph Lauren developed Ask Ralph: an AI-powered conversational tool providing styling tips and outfit recommendations from across the Polo Ralph Lauren brand. Powered by Azure OpenAI, the tool uses a natural language search engine to adapt dynamically to specific language inputs and interpret user intent to improve accuracy. It applies contextual understanding to support complex queries with exploratory or nuanced information needs, and can discern tone, satisfaction and intent to refine recommendations. The tool also picks up on cues like location-based insights or event-driven needs. With Ask Ralph, customers can now reimagine how they shop online by putting the brand’s unique and iconic take on style right into their own hands.
Industrial automation and digital transformation expert Rockwell Automation is integrating AI and advanced analytics into its products to help manufacturers adapt seamlessly to market changes, reduce risk and develop agentic AI capabilities to support innovation and growth. FactoryTalk Design Studio Copilot, a cloud-based environment for programming, enables rapid updates to code for evolving production needs — reducing complex coding tasks from days to minutes. Rockwell’s digital twin software, Emulate3D®, creates physics-based models for virtual testing of automation code and AI, reducing costly real-world errors and production risks while cutting on-site commissioning times by 50%. With the integration of NVIDIA Omniverse — a collaborative, large-scale digital twin platform — users can perform multi-user factory design and testing to facilitate cross-disciplinary collaboration, address industry challenges and unlock opportunities through digital simulation before real-world deployment.
To enable a cleaner, more resilient energy future, Schneider Electric is powering AI-driven industry innovation by addressing grid stability and enterprise sustainability challenges. Built using Microsoft Azure, the company developed solutions for organizations to act faster and smarter while delivering measurable improvements in grid reliability and enterprise ESG management. Resource Advisor Copilot transforms raw ESG and energy data into actionable insights via natural language queries to support knowledge-based and system data questions; in early testing, it saved sustainability managers hundreds of hours annually on data analysis and reporting tasks. Grid AI Assistant allows operators to interact with complex grids using natural language to improve response times and accuracy during critical events; reducing outages by 40% and speeding up application deployment by 60%. Schneider Electric’s integration of AI tools reflects a strategic approach to digitally transforming energy management, addressing both operational resilience and sustainability imperatives.
To enhance personalized learning, streamline operations and support educators with innovative technology, the State of São Paulo’s Department of Education (SEDUC) partnered with Microsoft to equip schools with cloud and AI solutions — including Azure OpenAI, Microsoft 365, Azure and Dynamics 365. SEDUC is applying responsible AI solutions at scale to address sector priorities like delivering timely, high-quality formative feedback and reducing repetitive administrative work. With Essay Grader, teachers automate portions of grading and receive suggested feedback, freeing time for lesson design and individual support. With Question Grader, students can answer questions more openly with their own perspectives and reasoning while still receiving curated feedback typically reserved for extensive exams. By leveraging these AI-powered solutions, SEDUC is improving learning outcomes, boosting efficiency and strengthening teacher impact — anchored in equity, transparency and sound governance.
Australia’s leading telecommunications company, Telstra, is transforming its customer service operations to improve the experience for its customers and the people who serve them. One of the biggest pain points for teams is navigating multiple systems to identify and resolve a customer issue — leading to long handling times and reliance on how team members interpret various data sources. By leveraging AI solutions built on Azure OpenAI and Microsoft 365 Copilot, the company is enabling instant knowledge access and streamlined workflows. With One Sentence Summary, agents have a concise overview of customer interactions to improve efficiency and customer satisfaction — reducing call handling time by over one minute and repeat contacts by nearly 10%. Ask Telstra provides AI-generated responses from Telstra’s knowledge base in near real-time to assist agents with accurate product, plan and troubleshooting information across a wide variety of topics during calls; facilitating seamless agent-customer interactions with AI assistance.
As one of the world’s largest automakers, Toyota is pioneering AI in manufacturing with the O-beya System: a multi-agent AI system that simulates expert discussions virtually. Based on decades of engineering knowledge, the solution fosters a collaborative project management approach to enhance problem-solving and innovation in vehicle development while identifying key challenges to help analyze and diagnose problems. O-beya can auto-select AI agents in fields like fuel efficiency, drivability, noise and vibration, energy management and power management to pinpoint causes and suggest solutions. The system also offers interactive features, including prompt history, term explanations and creative summaries, to further enable engineers to explore and validate mitigation strategies efficiently. It leverages Microsoft Azure OpenAI, Azure AI Search and Azure Cosmos DB to analyze internal design data and help Toyota accelerate innovation, preserve institutional knowledge and resolve complex engineering issues faster. Since January 2024, over 800 powertrain engineers have accessed the system, using it hundreds of times monthly across multiple business units.
As we seek to help our customers realize their AI ambitions, our mission remains unchanged: to empower every person and every organization on the planet to achieve more. We are at our best as a company when we put our technology to work for others. As you move forward on your AI journey, ask what AI can do for your organization and what it means to demand more from it. Leveraging the Microsoft portfolio, together we can do more to positively impact society; going beyond efficiency and productivity to solve for humanity’s biggest challenges. I look forward to partnering with you on your path to becoming Frontier.
The post Becoming Frontier: How human ambition and AI-first differentiation are helping Microsoft customers go further with AI appeared first on The Official Microsoft Blog.
The next chapter of the Microsoft–OpenAI partnership
Since 2019, Microsoft and OpenAI have shared a vision to advance artificial intelligence responsibly and make its benefits broadly accessible. What began as an investment in a research organization has grown into one of the most successful partnerships in our industry. As we enter the next phase of this partnership, we’ve signed a new definitive agreement that builds on our foundation, strengthens our partnership, and sets the stage for long-term success for both organizations.
First, Microsoft supports the OpenAI board moving forward with formation of a public benefit corporation (PBC) and recapitalization. Following the recapitalization, Microsoft holds an investment in OpenAI Group PBC valued at approximately $135 billion, representing roughly 27 percent on an as-converted diluted basis, inclusive of all owners – employees, investors, and the OpenAI Foundation. Excluding the impact of OpenAI’s recent funding rounds, Microsoft held a 32.5 percent stake on an as-converted basis in the OpenAI for-profit.
The agreement preserves key elements that have fueled this successful partnership – meaning OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity until Artificial General Intelligence (AGI).
It also refines and adds new provisions that enable each company to independently continue advancing innovation and growth.
What has evolved:
- Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.
- Microsoft’s IP rights for both models and products are extended through 2032 and now include models post-AGI, with appropriate safety guardrails.
- Microsoft’s IP rights to research, defined as the confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or through 2030, whichever is first. Research IP includes, for example, models intended for internal deployment or research only. Beyond that, research IP does not include model architecture, model weights, inference code, fine-tuning code, or any IP related to data center hardware and software; Microsoft retains these non-research IP rights.
- Microsoft’s IP rights now exclude OpenAI’s consumer hardware.
- OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.
- Microsoft can now independently pursue AGI alone or in partnership with third parties.
- If Microsoft uses OpenAI’s IP to develop AGI, prior to AGI being declared, the models will be subject to compute thresholds; those thresholds are significantly larger than the size of systems used to train leading models today.
- The revenue share agreement remains until the expert panel verifies AGI, though payments will be made over a longer period of time.
- OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.
- OpenAI can now provide API access to US government national security customers, regardless of the cloud provider.
- OpenAI is now able to release open weight models that meet requisite capability criteria.
As we step into this next chapter of our partnership, both companies are better positioned than ever to continue building great products that meet real-world needs, and create new opportunity for everyone and every business.
The post The next chapter of the Microsoft–OpenAI partnership appeared first on The Official Microsoft Blog.
Accelerating our commercial growth
Satya Nadella, Chairman and CEO, shared the below communication with Microsoft employees this morning.
We are in the midst of a tectonic AI platform shift, one that requires us to both manage and grow our at-scale commercial business today, while building the new frontier and executing flawlessly across both.
History shows that general purpose technologies like AI drive step changes in productivity and GDP growth, and we have a unique opportunity to help our customers and the world realize this promise.
Our success depends on enabling commercial and public sector customers and partners to combine their human capital with new AI capabilities to change the frontier of how they operate. To accelerate this, we will increasingly need to bring together sales, marketing, operations, and engineering to drive growth and strengthen our position as the partner of choice for AI transformation.
With this context, I have asked Judson Althoff to take on an expanded role as CEO of our commercial business. Over the past nine years, Judson has led our global sales organization and was the architect behind designing and building Microsoft Customer and Partner Solutions (MCAPS) into what it is today: the “number one seed” in the industry and our company’s most important growth engine.
Takeshi Numoto and his marketing team will join this new organization, with Takeshi reporting directly to Judson as CMO, while also continuing to report directly to me on all-up business models, planning, consumer marketing, and corporate brand and communications.
Our operations organization will also move to report to Judson. By bringing operations into the commercial business, we can tighten the feedback loop between what customers need and how we deliver and support them. Carolina Dybeck Happe will continue to report to me, as she works on our overall company transformation and continues to closely partner with Judson.
Additionally, Judson will lead a new commercial leadership team that brings together leaders from engineering, sales, marketing, operations, and finance to drive our product strategy and governance, GTM readiness, and sales motions with shared accountability for the rigor and executional excellence our customers expect.
This will also allow our engineering leaders and me to be laser focused on our highest ambition technical work—across our datacenter buildout, systems architecture, AI science, and product innovation—to lead with intensity and pace in this generational platform shift. Each one of us needs to be at our very best in terms of rapidly learning new skills, adopting new ways to work, and staying close to the metal to drive innovation across the entire stack!!
This isn’t just evolution, it’s reinvention, for each of us professionally and for Microsoft.
Satya
The post Accelerating our commercial growth appeared first on The Official Microsoft Blog.
Introducing Microsoft Marketplace — Thousands of solutions. Millions of customers. One Marketplace.
A new breed of industry-leading company is taking shape — Frontier Firms. These organizations blend human ambition with AI-powered technology to reshape how innovation is scaled, work is orchestrated and value is created. They’re accelerating AI transformation to enrich employee experiences, reinvent customer engagement, reshape business processes and unlock creativity and innovation.
To empower customers in becoming Frontier, we’re excited to announce the launch of the reimagined Microsoft Marketplace, your trusted source for cloud solutions, AI apps and agents. This further establishes Marketplace as an extension of the Microsoft Cloud, where we collaborate with our partner ecosystem to bring their innovations to our customers globally. By offering a comprehensive catalog across cloud solutions and industries, Microsoft Marketplace accelerates the path to becoming a Frontier Firm. With today’s announcement, we are excited to share:
- The new Microsoft Marketplace, a single destination to find, try, buy and deploy cloud solutions, AI apps and agents. Azure Marketplace and Microsoft AppSource are now unified to simplify cloud and AI management. Available today in the US and coming soon to customers worldwide.
- Tens of thousands of cloud and industry solutions in the Marketplace catalog across a breadth of categories ranging from data and analytics to productivity and collaboration, in addition to industry-specific offerings.
- Over 3,000 AI apps and agents are newly available directly on Marketplace and in Microsoft products — from Azure AI Foundry to Microsoft 365 Copilot — with rapid provisioning within your Microsoft environment through industry standards like Model Context Protocol (MCP).
- Marketplace integrations with Microsoft’s channel ecosystem, empowering you to buy where and how you want — whether from your cloud service provider (CSP) or relying on a trusted partner to procure cloud and AI solutions on your behalf.
AI apps and agents for every use case
Microsoft Marketplace gives you access to thousands of AI apps and agents from our rich partner ecosystem designed to automate tasks, accelerate decision-making and unlock value across your business. With a new AI Apps and Agents category, you can easily and confidently find AI solutions that integrate with your organization’s existing Microsoft products.
“With Microsoft Marketplace, we reduced configuration time of AI apps from nearly 20 minutes to just 1 minute per instance. That efficiency boost has translated into increased productivity and lower operating costs. Marketplace is a strategic channel for Siemens, where we’ve seen an 8X increase in customer adoption. It’s a powerful platform for scaling both sides of our business.”
— Jeff Zobrist, VP Global Partner Ecosystem and Go To Market, Siemens Digital Industries Software
Special thanks to the partners launching new AI offerings in Microsoft Marketplace today.
Comprehensive catalog across cloud solutions and industries
Microsoft Marketplace offers solutions across dozens of categories ranging from data and analytics to productivity and collaboration, in addition to industry-specific offerings. Microsoft Marketplace is a seamless extension of the Microsoft Cloud, uniting solutions integrated with Azure, Microsoft 365, Dynamics 365, Power Platform, Microsoft Security and more.
“The Microsoft Marketplace, in particular, helps us balance innovation with confidence by giving us access to trusted solutions that integrate seamlessly with our Azure environment — ultimately enabling us to move faster while staying true to our Five Principles.”
— Matthew Hillegas, Commercial Director – Infrastructure & Information Security, Mars Inc.
For organizations with a Microsoft Azure Consumption Commitment, 100% of your purchase of any of the thousands of Azure benefit-eligible solutions available on Marketplace continues to count toward your commitment. This helps you spend smarter to maximize your cloud and AI investments.
Integrated experience from discovery to deployment
Contextually relevant cloud solutions, AI apps and agents built by our partners are also available directly within Microsoft products — providing users, developers and IT practitioners with approved solutions in the flow of work. For example, Agent Store includes Copilot agents within the Microsoft 365 Copilot experience. The same applies to apps in Microsoft Teams, models and tools in Azure AI Foundry and future experiences including MCP servers.
By integrating offerings from Marketplace directly into the Microsoft Cloud, IT is equipped with management and control tools that enable both innovation and governance. When you acquire a Copilot agent or an app running on Azure from Microsoft Marketplace, it’s provisioned and distributed to team members aligned to your security and governance standards.
Powering partner growth
For our partners, Microsoft Marketplace sits at the center of how we work together. We’re continuously expanding its capabilities to help our partners drive growth — whether that means scaling through digital sales, deepening channel partnerships or landing transformative deals.
We’ve invested in multiparty private offers, CSP integration and CSP private offers to connect software development companies and channel partners on Marketplace, creating more complete solutions to address customers’ needs. Today, we’re excited to share that valued partners including Arrow, Crayon, Ingram Micro, Pax8 and TD SYNNEX are integrating Microsoft Marketplace into their marketplaces, further extending customer reach.
Additionally, a new Marketplace capability called resale enabled offers is now in private preview. This empowers software companies to authorize their channel partners to sell on their behalf through private offers — unlocking new routes to market.
“We’re incredibly excited about the path forward with Microsoft. This integration with the Marketplace catalog is just the beginning — we see endless potential to co-innovate and help customers navigate their AI-first transformation with confidence.”
— Melissa Mulholland, Co-CEO, SoftwareOne and Crayon
Nicole Dezen, Chief Partner Officer and Corporate Vice President, Global Channel Partner Sales at Microsoft, shares more details about the partner opportunity with Microsoft Marketplace in her blog.
Becoming Frontier with Microsoft Marketplace
Whether you’re seeking to accelerate innovation, empower your teams with AI or unlock new value through trusted partners, Microsoft Marketplace brings together the solutions, expertise and ecosystem to meet your business needs. Explore the new Microsoft Marketplace. Thousands of solutions. Millions of customers. One Marketplace.
Alysa Taylor is the Chief Marketing Officer for Commercial Cloud and AI at Microsoft, leading teams that help organizations of all sizes around the world harness digital and AI innovation to transform how they operate and grow.
NOTE
Source: Work Trend Index Annual Report, 2025: The year the Frontier Firm is born, April 23, 2025
The post Introducing Microsoft Marketplace — Thousands of solutions. Millions of customers. One Marketplace. appeared first on The Official Microsoft Blog.
Inside the world’s most powerful AI datacenter
This week we introduced a wave of purpose-built datacenters and infrastructure investments around the world to support the global adoption of cutting-edge AI workloads and cloud services.
Today in Wisconsin we introduced Fairwater, our newest US AI datacenter and the largest, most sophisticated AI factory we’ve built yet. Multiple identical Fairwater datacenters are also under construction in other locations across the US.
In Narvik, Norway, Microsoft announced plans with nScale and Aker JV to develop a new hyperscale AI datacenter.
In Loughton, UK, we announced a partnership with nScale to build the UK’s largest supercomputer to support services in the UK.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investment and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with the global Microsoft Cloud of over 400 datacenters in 70 regions around the world. By linking these AI datacenters into a distributed network, we can multiply their efficiency and compute to further democratize access to AI services globally.
So what is an AI datacenter?
The AI datacenter: the new factory of the AI era

An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft’s AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many more leading AI workloads.
The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roofs. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer, using a single flat network to interconnect hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10X the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.
The role of our AI datacenters – powering frontier AI
Effective AI models rely on thousands of computers working together, powered by GPUs, or specialized AI accelerators, to process massive concurrent mathematical computations. They’re interconnected with extremely fast networks so they can share results instantly, and all of this is supported by enormous storage systems that hold the data (like text, images or video) broken down into tokens, the small units of information the AI learns from. The goal is to keep these chips busy all the time, because if the data or the network can’t keep up, everything slows down.
The AI training itself is a cycle: the AI processes tokens in sequence, makes predictions about the next one, checks them against the right answers and adjusts itself. This repeats trillions of times until the system gets better at whatever it’s being trained to do. Think of it like a professional football team’s practice. Each GPU is a player running a drill, the tokens are the plays being executed step by step, and the network is the coaching staff, shouting instructions and keeping everyone in sync. The team repeats plays over and over, correcting mistakes until they can execute them perfectly. By the end, the AI model, like the team, has mastered its strategy and is ready to perform under real game conditions.
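The predict-check-adjust cycle above can be sketched in miniature. The toy Python example below is purely illustrative (nothing like a frontier training stack); it learns next-token statistics from a tiny stream of tokens:

```python
from collections import defaultdict

# Toy illustration of the predict-check-adjust cycle: a bigram count
# model over a tiny token stream. For intuition only, not how frontier
# models are actually trained.
def train_bigram(tokens):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        # "check" the right answer and "adjust" the model's statistics
        counts[prev][nxt] += 1
    return counts

def predict(model, token):
    # "predict" the most likely next token given the training statistics
    seen = model[token]
    return max(seen, key=seen.get) if seen else None

model = train_bigram("the cat sat on the mat the cat ran".split())
print(predict(model, "the"))  # "cat" follows "the" most often -> cat
```

A real model replaces the count table with billions of parameters and the adjustment step with gradient descent, but the loop itself (predict, check, adjust, repeat trillions of times) is the same shape.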
AI infrastructure at frontier scale
Purpose-built infrastructure is critical to being able to power AI efficiently. To compute the token math at this trillion-parameter scale of leading AI models, the core of the AI datacenter is made up of dedicated AI accelerators (such as GPUs) mounted on server boards alongside CPUs, memory and storage. A single server hosts multiple GPU accelerators, connected for high-bandwidth communication. These servers are then installed into a rack, with top-of-rack (ToR) switches providing low-latency networking between them. Every rack in the datacenter is interconnected, creating a tightly coupled cluster. From the outside, this architecture looks like many independent servers, but at scale it functions as a single supercomputer where hundreds of thousands of accelerators can train a single model in parallel.
This datacenter runs a single, massive cluster of interconnected NVIDIA GB200 servers, with millions of compute cores and exabytes of storage, all engineered for the most demanding AI workloads. Azure was the first cloud provider to bring online the NVIDIA GB200 server, rack and full datacenter clusters. Each rack packs 72 NVIDIA Blackwell GPUs, tied together in a single NVLink domain that delivers 1.8 terabytes per second of GPU-to-GPU bandwidth and gives every GPU access to 14 terabytes of pooled memory. Rather than behaving like dozens of separate chips, the rack operates as a single, giant accelerator, capable of processing an astonishing 865,000 tokens per second, the highest throughput of any cloud platform available today. The Norway and UK AI datacenters will use similar clusters and take advantage of NVIDIA’s next AI chip design (GB300), which offers even more pooled memory per rack.
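The rack-level figures quoted above compose into simple per-GPU numbers. A back-of-envelope check (the even per-GPU split is purely illustrative — NVLink pools the memory, so every GPU can actually address all 14 TB):

```python
# Back-of-envelope math using only the GB200 rack figures quoted above.
gpus_per_rack = 72
pooled_memory_tb = 14          # pooled memory visible to every GPU
rack_tokens_per_sec = 865_000  # quoted rack throughput

# If the pooled memory were divided evenly, each GPU would contribute:
memory_per_gpu_gb = pooled_memory_tb * 1000 / gpus_per_rack

# Average throughput each GPU must sustain for the rack to hit its number:
tokens_per_gpu_per_sec = rack_tokens_per_sec / gpus_per_rack

print(round(memory_per_gpu_gb, 1))    # ~194.4 GB contributed per GPU
print(round(tokens_per_gpu_per_sec))  # ~12014 tokens/sec per GPU
```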
The challenge in establishing supercomputing scale, particularly as AI training continues to demand breakthrough scales of computing, is getting the networking topology just right. To ensure low-latency communication across multiple layers in a cloud environment, Microsoft needed to extend performance beyond a single rack. For the latest NVIDIA GB200 and GB300 deployments globally, at the rack level these GPUs communicate over NVLink and NVSwitch at terabytes per second, collapsing memory and bandwidth barriers. To connect multiple racks into a pod, Azure uses both InfiniBand and Ethernet fabrics that deliver 800 Gbps in a full fat-tree, non-blocking architecture, ensuring that every GPU can talk to every other GPU at full line rate without congestion. And across the datacenter, multiple pods of racks are interconnected to reduce hop counts and enable tens of thousands of GPUs to function as one global-scale supercomputer.
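The "non-blocking" property of a full fat tree comes down to a capacity balance: at every switch tier, bandwidth toward the servers equals bandwidth toward the spine, so no traffic pattern can oversubscribe a link. A minimal sketch of that check — the switch radix and port counts below are illustrative, not Azure's actual hardware:

```python
# Sketch of the non-blocking property of a full fat tree: a tier is
# non-blocking when its uplink capacity is at least its downlink
# capacity, so any all-to-all traffic pattern runs at full line rate.
# Port counts and radix here are illustrative, not Azure's real design.

def is_nonblocking(down_ports, up_ports, port_gbps=800):
    """True when uplink capacity >= downlink capacity at this tier."""
    return up_ports * port_gbps >= down_ports * port_gbps

# Full fat tree: a 64-port leaf splits 32 ports down (to GPUs) and
# 32 ports up (to the spine) -> no oversubscription.
print(is_nonblocking(down_ports=32, up_ports=32))   # True

# A cost-cut 3:1 oversubscribed design would congest under load:
print(is_nonblocking(down_ports=48, up_ports=16))   # False
```

This is why the fabric is described as letting every GPU talk to every other GPU at full line rate: the topology, not just the link speed, guarantees it.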
When laid out in a traditional datacenter hallway, physical distance between racks introduces latency into the system. To address this, the racks in the Wisconsin AI datacenter are laid out in a two-story datacenter configuration, so in addition to racks networked to adjacent racks, they are networked to additional racks above or below them.
This layered approach sets Azure apart. Microsoft Azure was not just the first cloud to bring GB200 online at rack and datacenter scale; we’re doing it at massive scale with customers today. By co-engineering the full stack with the best from our industry partners coupled with our own purpose-built systems, Microsoft has built the most powerful, tightly coupled AI supercomputer in the world, purpose-built for frontier models.

Addressing the environmental impact: closed loop liquid cooling at facility scale
Traditional air cooling can’t handle the density of modern AI hardware. Our datacenters use advanced liquid cooling systems — integrated pipes circulate cold liquid directly into servers, extracting heat efficiently. The closed-loop recirculation ensures zero water waste: the system is filled once and the water is then continually reused.
By designing purpose-built AI datacenters, we were able to build liquid cooling infrastructure into the facility directly to get us more rack-density in the datacenter. Fairwater is supported by the second largest water-cooled chiller plant on the planet and will continuously circulate water in its closed loop cooling system. The hot water is then piped out to the cooling “fins” on each side of the datacenter, where 172 20-foot fans chill and recirculate the water back to the datacenter. This system keeps the AI datacenter running efficiently, even at peak loads.

Over 90% of our datacenter capacity uses this system, requiring water only once during construction and continually reusing it with no evaporation losses. The remaining 10% of traditional servers use outdoor air for cooling, switching to water only during the hottest days, a design that dramatically reduces water usage compared to traditional datacenters.
We’re also using liquid cooling to support AI workloads in many of our existing datacenters; there, liquid cooling is accomplished with Heat Exchanger Units (HXUs) that also operate with zero operational water use.
Storage and compute: Built for AI velocity
Modern datacenters can contain exabytes of storage and millions of CPU compute cores. To support the AI infrastructure cluster, an entirely separate datacenter infrastructure is needed to store and process the data used and generated by the AI cluster. To give you an example of the scale — the Wisconsin AI datacenter’s storage systems are five football fields in length!

We reengineered Azure storage across these massive datacenter deployments to deliver true supercomputing scale for the most demanding AI workloads. Each Azure Blob Storage account can sustain over 2 million read/write transactions per second, and with millions of accounts available, we can elastically scale to meet virtually any data requirement.
Behind this capability is a fundamentally rearchitected storage foundation that aggregates capacity and bandwidth across thousands of storage nodes and hundreds of thousands of drives. This enables scaling to exabytes of storage while eliminating the need for manual sharding and simplifying operations for even the largest AI and analytics workloads.
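The idea of aggregating bandwidth across many storage nodes can be illustrated from the client side: split one large object into byte ranges and fetch them concurrently, so no single node's throughput becomes the ceiling. In this sketch `fetch_range` is a hypothetical stand-in for a real ranged read (e.g., an HTTP GET with a Range header); it just fabricates bytes so the example is self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_range(offset, length):
    """Hypothetical ranged read; a real client would hit a storage node.

    Here it just fabricates zero bytes so the sketch runs standalone.
    """
    return bytes(length)

def parallel_read(total_size, chunk_size=4 * 1024 * 1024, workers=16):
    """Read `total_size` bytes as concurrent chunk fetches, reassembled
    in order. Concurrency lets the reads land on many storage nodes,
    aggregating their bandwidth instead of serializing on one."""
    offsets = range(0, total_size, chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(
            lambda off: fetch_range(off, min(chunk_size, total_size - off)),
            offsets,
        )
        return b"".join(chunks)

data = parallel_read(10 * 1024 * 1024, chunk_size=1024 * 1024)
print(len(data))   # 10485760 bytes, reassembled in order
```

Managed layers like the platform described above do this scheduling for you; the sketch only shows why spreading ranges across nodes removes the single-node bottleneck.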
Key innovations such as BlobFuse2 deliver high-throughput, low-latency access for GPU node-local training, ensuring that compute resources are never idle and that massive AI training datasets are always available when needed. Multiprotocol support allows seamless integration with diverse data pipelines, while deep integration with analytics engines and AI tools accelerates data preparation and deployment.
Automatic scaling dynamically allocates resources as demand grows. Combined with advanced security, resiliency and cost-effective tiered storage, Azure’s storage platform sets the pace for next-generation workloads, delivering the performance, scalability and reliability required.
AI WAN: Connecting multiple datacenters for an even larger AI supercomputer
These new AI datacenters are part of a global network of Azure AI datacenters, interconnected via our Wide Area Network (WAN). This isn’t just about one building; it’s about a distributed, resilient and scalable system that operates as a single, powerful AI machine. Our AI WAN is built with AI-native bandwidth that can grow to support large-scale distributed training across multiple, geographically diverse Azure regions, allowing customers to harness the power of a giant AI supercomputer.
This is a fundamental shift in how we think about AI supercomputers. Instead of being limited by the walls of a single facility, we’re building a distributed system where compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions. This means greater resiliency, scalability and flexibility for customers.
Bringing it all together
To meet the critical needs of the largest AI challenges, we needed to redesign every layer of our cloud infrastructure stack. This isn’t just about isolated breakthroughs, but composing multiple new approaches across silicon, servers, networks and datacenters, leading to advancements where software and hardware are optimized as one purpose-built system.
Microsoft’s Wisconsin datacenter will play a critical role in the future of AI, built on real technology, real investment and real community impact. As we connect this facility with other regional datacenters, and as every layer of our infrastructure is harmonized as a complete system, we’re unleashing a new era of cloud-powered intelligence, secure, adaptive and ready for what’s next.
To learn more about Microsoft’s datacenter innovations, check out the virtual datacenter tour at datacenters.microsoft.com.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
The post Inside the world’s most powerful AI datacenter appeared first on The Official Microsoft Blog.
Microsoft leads shift beyond data unification to organization, delivering next-gen AI readiness with new Microsoft Fabric capabilities
We’re in a hinge moment for AI. The experiments are over and the real work has begun. Centralizing data, once the finish line, is now the starting point. The definition of “AI readiness” is evolving as increasingly sophisticated agents demand rich, contextualized data grounded in business operations to deliver meaningful results. What sets leaders apart is the quality of the data platform experience in delivering on the shared meaning, live context and interactivity that helps systems understand the business as it is, not just as a static report. Across industries, frontier firms are dissolving silos and equipping teams with AI agents and reasoning systems that go beyond answers to help people build, explore, decide and act. The result: a new rhythm of work that’s faster, more connected, more explainable and closer to the customer.
Microsoft Fabric: Powering AI‑Ready data innovation enterprise‑wide at FabCon Europe
As the first hyperscaler to fully embrace this paradigm, Microsoft is introducing new capabilities in its fastest-growing data and analytics platform, Microsoft Fabric, at the European Microsoft Fabric Community Conference (FabCon). With Fabric, we are bringing together all of an organization’s data into a single, AI‑ready foundation so every team can turn data into actionable insight with the full context of their business. At FabCon, Microsoft is announcing a major leap forward in its delivery of AI data readiness with Graph in Fabric, a low/no-code platform for modeling and analyzing relationships across enterprise data; and Maps in Fabric, which joins the recently launched digital twin builder in Microsoft Fabric as part of Real-Time Intelligence and brings geospatial analytics into Fabric, enabling users to visualize and enrich location-based data at scale.
We’re also expanding Fabric’s capabilities further with new OneLake shortcuts and mirroring sources, a Graph database connecting entities across OneLake, enhanced developer experiences and new security controls — providing everything needed to run mission-critical scenarios on Fabric.
These capabilities mark a fundamental evolution in data strategy for business leaders scaling intelligent AI applications and agents across their organizations.
Train smarter agents with Graph and Maps
The foundation of every successful AI agent isn’t just data — it’s organized knowledge. As businesses accelerate into the AI era, the challenge isn’t gathering more information, but structuring it so agents can reason, connect and act with purpose.
The previews of Graph and Maps in Fabric are designed to help businesses organize their raw data for real-world impact. Graph in Fabric draws on the graph design principles proven at LinkedIn to reveal connections across customers, partners and supply chains, enabling organizations to visualize and query relationships that drive business outcomes.
Maps in Fabric brings geospatial analytics, empowering teams to make location-aware decisions as they respond to operational challenges in real time.
But these aren’t just technical milestones, they’re strategic tools for business leaders. AI is sparking new cross-company collaboration by connecting enterprise data — uniting business functions, accelerating decisions and empowering teams to share and scale value through open data flow. Whether it’s mapping supply chain dependencies or visualizing customer journeys, Graph and Maps help businesses move from isolated data points to a connected, actionable foundation for AI.
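The kind of relationship question a graph model answers — "what downstream entities does a supplier disruption touch?" — reduces to a graph traversal. A conceptual sketch with a hypothetical supply chain; Graph in Fabric exposes this through its own low/no-code modeling and query surface, so the data and the hand-rolled traversal below are purely illustrative:

```python
from collections import deque

# Hypothetical supplier -> customer edges (illustrative data only).
edges = {
    "SupplierA": ["PlantX", "PlantY"],
    "PlantX": ["Retailer1"],
    "PlantY": ["Retailer1", "Retailer2"],
}

def downstream(start):
    """Breadth-first walk: every entity reachable from `start`,
    i.e. everything a disruption at `start` could affect."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(downstream("SupplierA"))
# ['PlantX', 'PlantY', 'Retailer1', 'Retailer2']
```

Relational joins can express this too, but only awkwardly for paths of unknown depth — which is exactly the case a graph engine is built for.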
Discover how Graph and Maps in Fabric unlock real-time intelligence for AI-driven operations. Get the engineering inside scoop from Corporate Vice President of Messaging and Real-Time Analytics, Yitzhak Kesselman, in his latest blog: “The Foundation for Powering AI-Driven Operations.”
Enhancing developer experiences across Fabric to accelerate AI projects
Fabric is quickly becoming the go-to platform for data developers worldwide. To fuel that momentum, we’re rolling out new tools that make it easier to build, automate and innovate.
The new Fabric Extensibility Toolkit simplifies architecture and automation, so every solution is secure, scalable and aligned to business needs. And with the preview of Fabric Model Context Protocol (MCP), developers can tap into AI-assisted code generation and item authoring right inside familiar environments like Visual Studio Code and GitHub Codespaces.
These updates aren’t just for software developers. They’re for any business leader ready to turn organized data into competitive advantage. Fabric helps teams move from experimentation to enterprise-scale impact, with speed and governance built in.
OneLake: The AI-Ready data foundation
OneLake is the unified data lake at the heart of Fabric. It’s designed to ingest data once and make it instantly usable across analytics, AI and applications to accelerate insight. Today, we’re introducing new features to give teams unprecedented visibility and control with OneLake.
With the addition of mirroring capabilities for Oracle and Google BigQuery, expanded support for data agents and OneLake shortcuts to Azure Blob Storage, organizations can bring all their data together, no matter where it lives.
OneLake shortcut transformations can now convert JSON and Parquet files to Delta tables for instant analysis. OneLake also offers secure governance tools, including a new Secure tab in the catalog for managing permissions and a Govern tab for data oversight.
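Conceptually, converting JSON files to a table format like Delta or Parquet means pivoting row-oriented records into per-column arrays, which is what lets analytical engines scan a single column without touching the rest. The shortcut transformation itself is managed by Fabric; this sketch, with made-up records, only illustrates the row-to-column pivot at its core:

```python
import json

# Hypothetical JSON-lines input (illustrative records only).
raw = "\n".join([
    '{"id": 1, "city": "Vienna", "temp": 21.5}',
    '{"id": 2, "city": "Madrid", "temp": 31.0}',
    '{"id": 3, "city": "Oslo", "temp": 12.25}',
])

# Row-oriented: one dict per record, as JSON stores it.
rows = [json.loads(line) for line in raw.splitlines()]

# Column-oriented: one array per field, as Parquet/Delta store it.
# An engine answering "average temp" now reads one array, not every row.
columns = {key: [row[key] for row in rows] for key in rows[0]}

print(columns["city"])   # ['Vienna', 'Madrid', 'Oslo']
print(columns["temp"])   # [21.5, 31.0, 12.25]
```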
We’re also releasing the Azure AI Search integration with OneLake. By making this available in the Azure AI Foundry portal, we’re streamlining the experience for developers and data teams, helping them build smarter, more context-aware agents faster.
Our OneLake Table API preview allows apps to discover and inspect tables using Fabric’s security model, while OneLake diagnostics enables workspace owners to capture all data activity and storage operations.
Microsoft Fabric and Azure AI Foundry: A complete data, AI and agent ecosystem
In the AI era, every project is a data project, and success depends on reducing complexity. Microsoft is addressing this head-on by continuing to natively integrate Fabric and Azure AI Foundry together to help simplify how enterprises design, customize and manage AI apps and agents.
Fabric provides a single way to reason over data wherever it resides, delivering the structured, contextualized foundation AI needs. On top of that foundation, Azure AI Foundry enables developers to work with their favorite tools, including GitHub, Visual Studio and Copilot Studio, to efficiently build and scale AI applications and agents, while giving IT leaders visibility into performance, governance and ROI.
By bringing data, models and operations together, Fabric and Azure AI Foundry help businesses accelerate innovation and align AI initiatives with strategic goals. This unified approach eliminates complexity, speeds adoption and creates a platform-first advantage so organizations can unlock new value from their data and lead in the next generation of AI readiness.
Build the foundation, lead the future
The organizations leading this next chapter aren’t just deploying AI, they’re engineering for it. That starts with a foundation where data is unified, governed and now enriched with context so AI apps and agents can act confidently and scale without friction. Graph and Maps, enhanced developer tools, OneLake improvements and integration with Azure AI Foundry push Microsoft Fabric past data unification into AI‑ready, context‑rich data built for tomorrow’s AI challenges.
Those organizations are also skilling up. Thousands of Fabric users have passed their exams to achieve more than 50,000 certifications collectively for Foundry, Fabric Analytics Engineer and Fabric Data Engineer roles.
The future of AI belongs to platforms, not point solutions — ecosystems that connect data, intelligence and action. With that foundation, every agent, app and insight compounds value. Microsoft delivers that platform today, helping organizations unlock new levels of intelligence and impact.
Explore the full spectrum of new features coming to Fabric in today’s blog from Arun Ulagaratchagan, Corporate Vice President of Azure Data: “FabCon Vienna: Build data-rich agents on an enterprise-ready foundation.”
The post Microsoft leads shift beyond data unification to organization, delivering next-gen AI readiness with new Microsoft Fabric capabilities appeared first on The Official Microsoft Blog.
A joint statement from Microsoft and OpenAI
Microsoft and OpenAI have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership. We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.
The post A joint statement from Microsoft and OpenAI appeared first on The Official Microsoft Blog.
Flexible work update
Amy Coleman, Executive Vice President and Chief People Officer, shared the below communication with Microsoft employees this morning.
How we work has forever changed. I remember starting at Microsoft in the late ‘90s, always in the office, no laptops, and primarily working with the people right down the hall. As technology evolved and our business expanded, we became more open, more global, and able to scale in ways we couldn’t have imagined. Then the pandemic reshaped everything. It pushed us to think differently about work, to connect like never before (thank you Teams!), reminded us of how much we value being together, and gave us focus and autonomy in the traditional workday. We’re not going back, and we shouldn’t. Instead, we should take the best of what we’ve learned and move forward.
In the AI era, we are moving faster than ever, building world-class technology that changes how people live and work, and how organizations everywhere operate. If you reflect on our history, the most meaningful breakthroughs happen when we build on each other’s ideas together, in real time.
We’ve looked at how our teams work best, and the data is clear: when people work together in person more often, they thrive — they are more energized, empowered, and they deliver stronger results. As we build the AI products that will define this era, we need the kind of energy and momentum that comes from smart people working side by side, solving challenging problems together.
With that in mind, we’re updating our flexible work expectations to three days a week in the office.
We’ll roll this out in three phases: 1) starting in Puget Sound at the end of February; 2) expanding to other US locations; 3) then launching outside the US.
Our goal with this change is to provide more clarity and consistency in how we come together, while maintaining the flexibility we know you value. We want you to continue to shape your schedule in ways that work best for you, making in-person time intentional and impactful. Importantly, this update is not about reducing headcount. It’s about working together in a way that enables us to meet our customers’ needs.
For some of you, this is not a change. For others this may be a bigger adjustment, which is exactly why we’re providing time to plan thoughtfully. As part of these updates, we’re also enhancing our workplace safety and security measures so we can continue to provide a workplace where every employee can do their best work.
What you need to know:
Puget Sound-area employees: If you live within 50 miles of a Microsoft office, you’ll be expected to work onsite three days a week by the end of February 2026. You’ll receive a personalized email today with more details. Please connect with your manager and team to understand your organization’s plans. If needed, you can request an exception by Friday, September 19.
Managers: You’ll find actions to take, and the resources to support both you and your team on the Managers@Microsoft SharePoint.
All employees: You’ll hear from your EVP or organizational leadership today with specific guidance. Each business will do what is best for their team, which means some groups will deviate from our company-wide expectations. If you are outside of the Puget Sound area, you do not need to take any action at this time unless your EVP communicates otherwise.
Timelines and details for additional US office locations will be announced soon. For employees outside the United States, we will begin planning in 2026. More information is available on the Flexible Work at Microsoft SharePoint.
As always, we’ll keep learning together to ensure Microsoft is the best place for you to grow and have a great career. Let’s keep moving forward together.
Thank you,
Amy
The post Flexible work update appeared first on The Official Microsoft Blog.
Accelerating AI adoption for the US government
Today, Microsoft and the US General Services Administration (GSA) announced a comprehensive agreement to bring a suite of productivity, cloud and AI services, including Microsoft 365 Copilot at no cost for up to 12 months for millions of existing Microsoft G5 users, to help agencies rapidly adopt secure and compliant advanced AI tools that will enhance operations, strengthen security and accelerate innovation for the American people. As an unparalleled milestone in advancing GSA’s OneGov strategy, Microsoft’s offerings will be available through a governmentwide unified pricing strategy that is expected to drive $3 billion in cost savings in the first year alone.
Enabling AI innovation and acceleration for federal agencies
This expansive offering will help agencies achieve key pillars of America’s AI Action Plan by enabling federal agencies to serve at the forefront of driving AI innovation and adoption in service to the American people. Through this agreement, federal agencies will access the latest AI capabilities at scale, now integrated in many of the products they already use, to achieve key administration priorities:
- Transforming productivity with AI: A unique Microsoft 365 and Copilot suite, offered exclusively to the federal government, enables agencies to automate workflows, analyze data and collaborate more efficiently, freeing public servants to focus on their core mission.
- Driving automation with AI agents: With AI agents, and no per-agent fees, agencies can build solutions for citizen inquiries, case management and contact centers, extending the reach and responsiveness of government services.
- Accelerating cloud modernization: With significant Azure discounts and the waiving of data egress fees, agencies can modernize infrastructure, reduce barriers to interagency collaboration and unlock the full power of advanced analytics and AI.
- Streamlining government operations: Dynamics 365 applications help agencies enhance citizen service, optimize supply chains and increase field responsiveness, directly impacting everyday public outcomes.
- Strengthening security across all levels: Integrated platforms such as Microsoft Entra ID and Sentinel provide advanced identity and threat protection, supporting the Zero Trust journey across federal environments.
Federal agencies can opt in to any or all of these offers through September 2026, with discounted pricing available for up to 36 months.
Innovation meets security
Agencies can quickly adopt these solutions knowing these services have already achieved key FedRAMP security and compliance authorizations, meeting more than 400 critical security controls established in NIST 800-53 standards. Microsoft 365, Azure and our key AI services are authorized at FedRAMP High. Microsoft 365 Copilot received provisional authorization from the US Department of Defense, with FedRAMP High expected soon.
Investing for the future
Our commitment goes beyond technology and savings. Microsoft is also committing $20 million in additional support services to help agencies implement the offers and maximize the value of these services, along with complimentary cost-optimization workshops that will enable agencies to identify opportunities to reduce software duplication, automate services and improve cross-team interoperability. These investments reflect our belief that technology’s greatest value lies in its ability to empower people.
Taken together, we anticipate these services have the potential to deliver more than $6 billion in total estimated value over three years.
For more than four decades, Microsoft has been privileged to support the US government’s most vital missions. Today, as we stand at the forefront of the AI era, we reaffirm our dedication to serving as a trusted partner — one that listens, innovates responsibly and shares in the mission to advance the nation’s public good. We look forward to the next chapter helping agencies harness secure AI and cloud solutions to build a stronger, more resilient and more innovative future for all.
To learn how to take advantage of these offers, contact your Microsoft representative or authorized reseller*. For any additional questions, you can email our Microsoft OneGov team.
*Microsoft OneGov offers are applicable to Microsoft federal customers with Enterprise Agreements and exclude AOS-G and CSP programs; Azure Consumption Discounts and waived egress fees applicable to select Governmentwide Acquisition Contracts.
The post Accelerating AI adoption for the US government appeared first on The Official Microsoft Blog.