Category: News
Copilot Chat Arrives in Microsoft 365 Apps
A Logical Progression for Copilot in the Microsoft 365 Apps
The news in message center notification MC1096218 (last updated 17 September 2025) about the rollout of Copilot Chat confirms the worst fears of some that Microsoft is on a one-way path to stuffing Copilot into as many places as it can. Well, that feeling is backed by some truth, but in this case, I think the change is a natural progression of Copilot’s existing presence in apps like Word, where it’s been producing document summaries since last year.
Once Copilot appeared in the Office apps, there was only one way forward, and that wasn’t to see Copilot disappear from Office. Now Copilot Chat is available in Word, Excel, and PowerPoint, just like it has been available in Outlook (new and classic) for a while. Microsoft says that the rollout is expected to complete in the coming weeks, which basically means that it will turn up when the stars align in terms of desktop client Office build and server infrastructure.
Copilot Chat for All
Copilot Chat is available for any user of the Microsoft 365 apps, with or without a Microsoft 365 Copilot license. The difference is that those with Microsoft 365 Copilot licenses can access tenant resources like documents stored in SharePoint Online and OneDrive for Business while those without are restricted to web queries (via Bing search).
Working in Copilot and an Office App
The idea behind the side-by-side implementation is that users can work on a file in the main pane while being able to interact with Copilot in a side pane (Figure 1). It’s a useful feature that makes it easy to take questions from the main file, research them in Copilot, and take the results back into the file.

Apart from anything else, integrating Copilot so tightly into the Office apps makes it less likely that users will seek AI assistance elsewhere and potentially end up uploading documents from SharePoint and OneDrive to services like ChatGPT. It also encourages people to consider upgrading from the free Microsoft Copilot to the full-feature and more expensive Microsoft 365 Copilot.
Word Action Button for Microsoft 365 Copilot Chat
After Outlook, Word is easily the Office app where I spend most time. The announcement in message center notification MC1143298 (last updated 17 September 2025) that an Open in Word action button will soon be available to move text from Copilot to Word is therefore very interesting.
It’s possible to move content from Copilot to Word now using Copilot pages as an interim step. Copilot pages are built from Loop, so the intention is that the content is worked on in Loop after coming from Copilot rather than being exported to a new app. At this point, Word is a more sophisticated word processing tool than Loop is. Given the use cases for the two apps, this is the natural state of affairs. I seldom need to collaborate with others to write articles or book text. Being able to move content from Copilot to Word is an action I shall check out once it becomes available later this month.
Teams Move to the Unified Microsoft 365 Apps Domain
Before closing for the weekend, a little bird tells me that Teams might soon move from its teams.microsoft.com domain to teams.cloud.microsoft as part of the initiative launched by Microsoft to create a unified domain for Microsoft 365 apps.
In March 2024, Microsoft posted a note for developers to tell them that Teams apps needed to be able to use teams.cloud.microsoft. By this point, I’m sure that most ISVs will have updated their apps, but if your tenant has some custom home-grown Teams apps, it’s worthwhile checking with the developers that the apps are ready to accommodate the domain switch. Who wants to be surprised when the switch happens?
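To get ahead of that check, here's a minimal PowerShell sketch (my own illustration, not part of either message center notification) that uses the Microsoft Graph PowerShell SDK to list the custom apps uploaded to the tenant app catalog, which tells you which apps and developers to follow up with. The Get-MgAppCatalogTeamApp cmdlet, the AppCatalog.Read.All permission, and the output fields are assumptions to adapt for your tenant.

# List custom (organization-distributed) Teams apps to review before the domain switch
Connect-MgGraph -Scopes "AppCatalog.Read.All" -NoWelcome
[array]$CustomApps = Get-MgAppCatalogTeamApp -Filter "distributionMethod eq 'organization'" -ExpandProperty "appDefinitions"
foreach ($App in $CustomApps) {
    # Report the most recent definition for each app so it can be checked with its developer
    $Definition = $App.AppDefinitions | Select-Object -Last 1
    [PSCustomObject]@{
        DisplayName     = $App.DisplayName
        TeamsAppId      = $App.Id
        Version         = $Definition.Version
        PublishingState = $Definition.PublishingState
    }
}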
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365. Only humans contribute to our work!
Inside the world’s most powerful AI datacenter
This week we have introduced a wave of purpose-built datacenters and infrastructure investments we are making around the world to support the global adoption of cutting-edge AI workloads and cloud services.
Today in Wisconsin we introduced Fairwater, our newest US AI datacenter, the largest and most sophisticated AI factory we’ve built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple identical Fairwater datacenters under construction in other locations across the US.
In Narvik, Norway, Microsoft announced plans with nScale and Aker JV to develop a new hyperscale AI datacenter.
In Loughton, UK, we announced a partnership with nScale to build the UK’s largest supercomputer to support services in the UK.
These AI datacenters are significant capital projects, representing tens of billions of dollars of investments and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with our global Microsoft Cloud of over 400 datacenters in 70 regions around the world. Through innovation that can enable us to link these AI datacenters in a distributed network, we multiply the efficiency and compute in an exponential way to further democratize access to AI services globally.
So what is an AI datacenter?
The AI datacenter: the new factory of the AI era

An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft’s AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many more leading AI workloads.
The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roofs. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.
Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat network interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10X the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.
The role of our AI datacenters – powering frontier AI
Effective AI models rely on thousands of computers working together, powered by GPUs, or specialized AI accelerators, to process massive concurrent mathematical computations. They’re interconnected with extremely fast networks so they can share results instantly, and all of this is supported by enormous storage systems that hold the data (like text, images or video) broken down into tokens, the small units of information the AI learns from. The goal is to keep these chips busy all the time, because if the data or the network can’t keep up, everything slows down.
The AI training itself is a cycle: the AI processes tokens in sequence, makes predictions about the next one, checks them against the right answers and adjusts itself. This repeats trillions of times until the system gets better at whatever it’s being trained to do. Think of it like a professional football team’s practice. Each GPU is a player running a drill, the tokens are the plays being executed step by step, and the network is the coaching staff, shouting instructions and keeping everyone in sync. The team repeats plays over and over, correcting mistakes until they can execute them perfectly. By the end, the AI model, like the team, has mastered its strategy and is ready to perform under real game conditions.
AI infrastructure at frontier scale
Purpose-built infrastructure is critical to being able to power AI efficiently. To compute the token math at this trillion-parameter scale of leading AI models, the core of the AI datacenter is made up of dedicated AI accelerators (such as GPUs) mounted on server boards alongside CPUs, memory and storage. A single server hosts multiple GPU accelerators, connected for high-bandwidth communication. These servers are then installed into a rack, with top-of-rack (ToR) switches providing low-latency networking between them. Every rack in the datacenter is interconnected, creating a tightly coupled cluster. From the outside, this architecture looks like many independent servers, but at scale it functions as a single supercomputer where hundreds of thousands of accelerators can train a single model in parallel.
This datacenter runs a single, massive cluster of interconnected NVIDIA GB200 servers with millions of compute cores and exabytes of storage, all engineered for the most demanding AI workloads. Azure was the first cloud provider to bring online the NVIDIA GB200 server, rack and full datacenter clusters. Each rack packs 72 NVIDIA Blackwell GPUs, tied together in a single NVLink domain that delivers 1.8 terabytes per second of GPU-to-GPU bandwidth and gives every GPU access to 14 terabytes of pooled memory. Rather than behaving like dozens of separate chips, the rack operates as a single, giant accelerator, capable of processing an astonishing 865,000 tokens per second, the highest throughput of any cloud platform available today. The Norway and UK AI datacenters will use similar clusters, and take advantage of NVIDIA’s next AI chip design (GB300), which offers even more pooled memory per rack.
The challenge in establishing supercomputing scale, particularly as AI training requirements continue to require breakthrough scales of computing, is getting the networking topology just right. To ensure low latency communication across multiple layers in a cloud environment, Microsoft needed to extend performance beyond a single rack. For the latest NVIDIA GB200 and GB300 deployments globally, at the rack level these GPUs communicate over NVLink and NVSwitch at terabytes per second, collapsing memory and bandwidth barriers. Then to connect across multiple racks into a pod, Azure uses both InfiniBand and Ethernet fabrics that deliver 800 Gbps, in a full fat tree non-blocking architecture to ensure that every GPU can talk to every other GPU at full line rate without congestion. And across the datacenter, multiple pods of racks are interconnected to reduce hop counts and enable tens of thousands of GPUs to function as one global-scale supercomputer.
When laid out in a traditional datacenter hallway, physical distance between racks introduces latency into the system. To address this, the racks in the Wisconsin AI datacenter are laid out in a two-story datacenter configuration, so in addition to racks networked to adjacent racks, they are networked to additional racks above or below them.
This layered approach sets Azure apart. Microsoft Azure was not just the first cloud to bring GB200 online at rack and datacenter scale; we’re doing it at massive scale with customers today. By co-engineering the full stack with the best from our industry partners coupled with our own purpose-built systems, Microsoft has built the most powerful, tightly coupled AI supercomputer in the world, purpose-built for frontier models.

Addressing the environmental impact: closed loop liquid cooling at facility scale
Traditional air cooling can’t handle the density of modern AI hardware. Our datacenters use advanced liquid cooling systems — integrated pipes circulate cold liquid directly into servers, extracting heat efficiently. The closed-loop recirculation ensures zero water waste: water is needed only to fill the system once and is then continually reused.
By designing purpose-built AI datacenters, we were able to build liquid cooling infrastructure directly into the facility, giving us greater rack density in the datacenter. Fairwater is supported by the second largest water-cooled chiller plant on the planet and will continuously circulate water in its closed-loop cooling system. The hot water is then piped out to the cooling “fins” on each side of the datacenter, where 172 20-foot fans chill and recirculate the water back to the datacenter. This system keeps the AI datacenter running efficiently, even at peak loads.

Over 90% of our datacenter capacity uses this system, requiring water only once during construction and continually reusing it with no evaporation losses. The remaining 10% of traditional servers use outdoor air for cooling, switching to water only during the hottest days, a design that dramatically reduces water usage compared to traditional datacenters.
We’re also using liquid cooling to support AI workloads in many of our existing datacenters; this liquid cooling is accomplished with Heat Exchanger Units (HXUs) that also operate with zero-operational water use.
Storage and compute: Built for AI velocity
Modern datacenters can contain exabytes of storage and millions of CPU compute cores. To support the AI infrastructure cluster, an entirely separate datacenter infrastructure is needed to store and process the data used and generated by the AI cluster. To give you an example of the scale — the Wisconsin AI datacenter’s storage systems are five football fields in length!

We reengineered Azure storage for the most demanding AI workloads, across these massive datacenter deployments for true supercomputing scale. Each Azure Blob Storage account can sustain over 2 million read/write transactions per second, and with millions of accounts available, we can elastically scale to meet virtually any data requirement.
Behind this capability is a fundamentally rearchitected storage foundation that aggregates capacity and bandwidth across thousands of storage nodes and hundreds of thousands of drives. This enables scaling to exabyte-scale storage, eliminating the need for manual sharding and simplifying operations for even the largest AI and analytics workloads.
Key innovations such as BlobFuse2 deliver high-throughput, low-latency access for GPU node-local training, ensuring that compute resources are never idle and that massive AI training datasets are always available when needed. Multiprotocol support allows seamless integration with diverse data pipelines, while deep integration with analytics engines and AI tools accelerates data preparation and deployment.
Automatic scaling dynamically allocates resources as demand grows. Combined with advanced security, resiliency and cost-effective tiered storage, Azure’s storage platform sets the pace for next-generation workloads, delivering the performance, scalability and reliability required.
AI WAN: Connecting multiple datacenters for an even larger AI supercomputer
These new AI datacenters are part of a global network of Azure AI datacenters, interconnected via our Wide Area Network (WAN). This isn’t just about one building; it’s about a distributed, resilient and scalable system that operates as a single, powerful AI machine. Our AI WAN is built with AI-native bandwidth that scales to enable large-scale distributed training across multiple, geographically diverse Azure regions, allowing customers to harness the power of a giant AI supercomputer.
This is a fundamental shift in how we think about AI supercomputers. Instead of being limited by the walls of a single facility, we’re building a distributed system where compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions. This means greater resiliency, scalability and flexibility for customers.
Bringing it all together
To meet the critical needs of the largest AI challenges, we needed to redesign every layer of our cloud infrastructure stack. This isn’t just about isolated breakthroughs, but composing multiple new approaches across silicon, servers, networks and datacenters, leading to advancements where software and hardware are optimized as one purpose-built system.
Microsoft’s Wisconsin datacenter will play a critical role in the future of AI, built on real technology, real investment and real community impact. As we connect this facility with other regional datacenters, and as every layer of our infrastructure is harmonized as a complete system, we’re unleashing a new era of cloud-powered intelligence, secure, adaptive and ready for what’s next.
To learn more about Microsoft’s datacenter innovations, check out the virtual datacenter tour at datacenters.microsoft.com.
Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.
The post Inside the world’s most powerful AI datacenter appeared first on The Official Microsoft Blog.
Closed contour with least square differences to dataset
Trying to reproduce a data model where both X and Y values in a dataset are summarised using the fourier parameters of a closed loop (FPCC) – but the fit function only works with one value of Y for one value of X (whereas a closed loop would have multiple values of Y for one value of X). Does anyone have ideas on how to approach this? fourier, closed contour MATLAB Answers — New Questions
Can I use google colab for running matlab codes
I am working on a deep learning code using vgg16, but I am facing difficulties in executing it on my computer as it takes a long time. So I tried using Google Colab but it did not work 🙁 deep learning, classification MATLAB Answers — New Questions
Unit test with file path parameter – improve formatting in PDF report
I am relatively new to MATLAB and have a question about generating test reports.
I am working with a class-based unit test that takes a file path as a parameter. These file paths can become quite long. The default presentation of test results using
table(results)
is rather unsatisfactory.
Therefore, I wrote a custom
TestRunnerPlugin
that produces a nicer output on the command line – in particular, the file paths are displayed correctly and in full.
For the user of the unit test, it is crucial to know exactly which file was used when calling the test. This makes a clear and readable formatting of the file paths very important.
My question: How can I achieve the same improved output in the generated PDF report?
Attached is a complete minimal example:
ModuleTest provides two test functions.
Wrapper.m performs the test execution and result output.
MyPlugin.m adjusts the command-line output at the end of the tests.
Wrapper.m
import matlab.unittest.TestSuite;
import matlab.unittest.TestRunner;
import matlab.unittest.plugins.TestReportPlugin;
runner = TestRunner.withNoPlugins;
%for custom command line output
runner.addPlugin(MyPlugin);
%for pdf report
pdfFile = 'TestReport.pdf';
plugin = TestReportPlugin.producingPDF(pdfFile);
runner.addPlugin(plugin);
suite = TestSuite.fromClass(?ModuleTest);
results = runner.run(suite);
%standard command line output
table(results)
classdef ModuleTest < matlab.unittest.TestCase
    properties (TestParameter)
        file = {'C:scAppsMatlabR2023B_64sysFastDDSwin64includefastcdrexceptionsException.h', ...
            'C:scAppsMatlabR2023B_64sysjavajrewin64jrebindtplugindeployJava1.dll'};
    end
    methods(Test)
        function testFuncA(tc, file)
            tc.verifyEqual(isfile(file), fileExists(file));
        end
        function testFuncB(tc, file)
            tc.verifyEqual(~isfile(file), fileExists(file));
        end
    end
end
classdef MyPlugin < matlab.unittest.plugins.TestRunnerPlugin
    methods (Access=protected)
        function reportFinalizedSuite(plugin, pluginData)
            disp('### Test Results ###');
            for i = 1:numel(pluginData.TestResult)
                thisResult = pluginData.TestResult(i);
                if thisResult.Passed
                    status = 'PASSED';
                elseif thisResult.Failed
                    status = 'FAILED';
                elseif thisResult.Incomplete
                    status = 'SKIPPED';
                end
                % Obtain name of the test
                parts = split(thisResult.Name, '/');
                testName = parts{end};
                testName = erase(testName, regexp(testName, '\(.*\)', 'match'));
                % Parameter array, each element has the fields Name, Property, Value
                params = pluginData.TestSuite(i).Parameterization;
                %% extract information from the test
                % file
                selection = arrayfun(@(x) strcmp(x.Property, 'file'), params);
                file = params(selection).Value;
                fprintf('%s: %s in %f seconds. Test configuration: file %s.\n', ...
                    testName, status, thisResult.Duration, file);
            end
            disp('### Test Results ###');
            reportFinalizedSuite@matlab.unittest.plugins.TestRunnerPlugin(plugin, pluginData);
        end
    end
end
Below you can see a direct comparison between the modified output with correct file paths and the hard-to-read default output
My main concern is the formatting of file paths in the generated test report. Currently, long paths are truncated and not displayed correctly, which makes it difficult to see which file was actually used.
Similar to how I already customized the command line output with a plugin, I’d like to apply the same kind of adjustment to the PDF test report so that the full paths are shown clearly and in a readable format.
unit test, test report, testrunner, plugin, report customization, file paths MATLAB Answers — New Questions
Windows dark mode problem in app designer
I recently updated MATLAB to 2025 and made a new standalone app version of my program again.
My problem is that everything in my program is now either the normal color (grey) or black and I found out it’s related to my windows being in dark mode.
I can see that I haven’t put the same type of background color on everything originally, so some things have a background color and some things don’t have, and the things that don’t have has now changed to black.
So I started going through all the code and all the components and changing everything to have no background color in hopes that my program will accommodate dark mode.
The problem is the radio buttons: I can’t seem to get them to work in dark mode, even though they don’t have a background color set. Am I overlooking something here, or can someone help me?
My last question is: is there a way I can simply tell my program that it should always run in light mode, so I don’t have to change everything in it to accommodate dark mode?
Thanks in advance. Johan
standalone app, windows dark mode, radio buttons MATLAB Answers — New Questions
What’s the Best Way to Manage Guest Accounts?
Home-Brewed PowerShell or Microsoft Solutions for Guest Account Management
A recent podcast from the genial Merill Fernando featured Microsoft’s Jeremy Conley to talk about “how to really govern guest access.” The tagline “many tenants have 2-4x more guests than employees” captures the focus of the episode (a good listen) and while many organizations might not believe that guest accounts are quite so numerous in their Microsoft 365 tenants, the simple fact is that it’s all too easy to accumulate a vast collection of guests.
Microsoft 365 is responsible for this sad state of affairs. I started talking about the problems of guest accounts “going bad” (aging) soon after the introduction of Azure AD guest accounts for Office 365 groups in 2016 and the situation hasn’t improved much since. Things really took off with the introduction of Teams in 2017 and later, the adoption of guest accounts by SharePoint Online as the basis for sharing. My basic recommendation has always been to review guest accounts annually with the aim of removing unused guests.
Use Entra Governance or PowerShell for Guest Account Management
Microsoft has solutions to help, but only if organizations invest in Entra P2 licenses (naturally) to liberate ID governance features like lifecycle management and access reviews. If you can afford the licenses, you should certainly investigate using lifecycle management and access reviews to control guest accounts. But you don’t need to spend any money on additional licenses because controlling guest accounts is a reasonably straightforward task using PowerShell. Let’s discuss some of the tactics that tenants could adopt for guest management.
First, Microsoft doesn’t implement an expiration date for guest accounts, but you can easily assign one yourself and use that date as the basis for checking whether guest accounts are still needed.
For any type of guest account management, it’s a good idea to review guest sign-in activity. If a guest account doesn’t sign into a tenant within a certain period (say, 90 days), it’s probably obsolete and can be removed.
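By way of illustration, here's a minimal sketch of that check written with the Microsoft Graph PowerShell SDK (my code, not an official sample). It assumes consent for the User.Read.All and AuditLog.Read.All permissions and that the tenant meets the licensing requirement to read the signInActivity property; the 90-day cutoff is just the example period mentioned above.

# Report guest accounts with no sign-in activity in the last 90 days
Connect-MgGraph -Scopes "User.Read.All", "AuditLog.Read.All" -NoWelcome
$Cutoff = (Get-Date).AddDays(-90)
[array]$Guests = Get-MgUser -All -Filter "userType eq 'Guest'" -ConsistencyLevel eventual -CountVariable GuestCount -Property Id, DisplayName, Mail, CreatedDateTime, SignInActivity
$ObsoleteGuests = $Guests | Where-Object {
    # No recorded sign-in at all, or the last sign-in is older than the cutoff date
    (-not $_.SignInActivity.LastSignInDateTime) -or ($_.SignInActivity.LastSignInDateTime -lt $Cutoff)
} | Select-Object DisplayName, Mail, CreatedDateTime, @{n='LastSignIn'; e={$_.SignInActivity.LastSignInDateTime}}
$ObsoleteGuests | Sort-Object LastSignIn | Format-Table -AutoSize

Anything the report flags is a candidate for removal rather than an automatic deletion; the sponsor check described below closes the loop.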
Entra ID supports the concept of account sponsorship. In other words, one or more sponsor accounts can be associated with member or guest accounts. Sponsors are not assigned by default, but setting a default sponsor is easily done for guest accounts. The problem with default sponsors is that the selected account might not have any insight into how a guest account is used, but a default sponsor is better than none, and the lack of activity should always be the primary reason for considering an account to be inactive and a candidate for removal.

Sponsors are supposed to know why an account exists, so if a guest account is deemed obsolete due to lack of sign-in activity, you can report this fact and use the report data to contact the sponsors to ask if accounts should be removed or kept.
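Both the default sponsor assignment mentioned above and the sponsor report can be scripted. Here's a sketch (my own, using a hypothetical Guest.Admin@contoso.com account as the default sponsor) that assigns a default sponsor to any guest account without one. It uses a raw Graph request for the sponsor assignment because SDK cmdlet coverage for the sponsors relationship varies between module versions, and some tenants might need the beta endpoint rather than v1.0.

# Assign a default sponsor to guest accounts that don't have a sponsor
Connect-MgGraph -Scopes "User.ReadWrite.All" -NoWelcome
$DefaultSponsorId = (Get-MgUser -UserId 'Guest.Admin@contoso.com').Id
[array]$Guests = Get-MgUser -All -Filter "userType eq 'Guest'" -ConsistencyLevel eventual -CountVariable GuestCount -Property Id, DisplayName
foreach ($Guest in $Guests) {
    # Check for existing sponsors and only add the default sponsor when none is set
    $Sponsors = Invoke-MgGraphRequest -Method GET -Uri "/v1.0/users/$($Guest.Id)/sponsors"
    if (-not $Sponsors.value) {
        $Body = @{ '@odata.id' = "https://graph.microsoft.com/v1.0/users/$DefaultSponsorId" } | ConvertTo-Json
        Invoke-MgGraphRequest -Method POST -Uri "/v1.0/users/$($Guest.Id)/sponsors/`$ref" -Body $Body -ContentType 'application/json'
        Write-Host ("Assigned default sponsor to guest {0}" -f $Guest.DisplayName)
    }
}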
The Need to Nag Sponsors
One thing I haven’t done yet is to send nagging email to account sponsors to say that their sponsored guest accounts will be automatically removed in a week or so if they don’t reply with a justification for keeping the accounts. This is a good example of where a scheduled Azure Automation runbook is the right choice to run the code that checks for obsolete guest accounts and emails the account sponsors. I must write that script!
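A rough sketch of how such a runbook might look follows (an assumption about the approach, not the finished article). It presumes an Automation account whose managed identity holds the User.Read.All, AuditLog.Read.All, and Mail.Send permissions, plus a hypothetical Reports@contoso.com mailbox to send the nagging messages from.

# Runbook sketch: email sponsors about guest accounts with no sign-in for 90+ days
Connect-MgGraph -Identity -NoWelcome
$Sender = 'Reports@contoso.com'
$Cutoff = (Get-Date).AddDays(-90)
[array]$Guests = Get-MgUser -All -Filter "userType eq 'Guest'" -ConsistencyLevel eventual -CountVariable GuestCount -Property Id, DisplayName, Mail, SignInActivity
foreach ($Guest in $Guests) {
    $LastSignIn = $Guest.SignInActivity.LastSignInDateTime
    if ($LastSignIn -and $LastSignIn -ge $Cutoff) { continue }   # Guest is still active, so skip it
    $Sponsors = (Invoke-MgGraphRequest -Method GET -Uri "/v1.0/users/$($Guest.Id)/sponsors").value
    foreach ($Sponsor in $Sponsors) {
        $MailParams = @{
            Message = @{
                Subject      = "Guest account review: $($Guest.DisplayName)"
                Body         = @{ ContentType = 'Text'; Content = "The guest account $($Guest.Mail) that you sponsor has not signed in for 90 days. Please reply within a week with a justification to keep the account, or it will be removed." }
                ToRecipients = @(@{ EmailAddress = @{ Address = $Sponsor.mail } })
            }
            SaveToSentItems = $false
        }
        Send-MgUserMail -UserId $Sender -BodyParameter $MailParams
    }
}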
No one wants to remove guest accounts that are required for business purposes. Teams is probably the best example of where important guest accounts that appear underused might exist. I’ve documented five practical actions to manage guest accounts used with Teams in this article. Enforcing multifactor authentication for guest accounts through a conditional access policy is a critical step.
Act to Make Sure Your Tenant Implements Guest Account Management
Whether you decide to manage guest accounts using your own code or with Microsoft’s solutions really doesn’t matter. The important thing is to manage guest accounts, especially in terms of a regular clean-out of obsolete accounts. Insisting on multifactor authentication removes most of any security risk associated with having some underused guest accounts in Entra ID, but who doesn’t like a clean directory?
Need some assistance to write and manage PowerShell scripts for Microsoft 365, including Azure Automation runbooks? Get a copy of the Automating Microsoft 365 with PowerShell eBook, available standalone or as part of the Office 365 for IT Pros eBook bundle.
Wrong volume and level from tank (TL)
Hello
I am modelling a hot oil system in Simscape where the expansion tank is modelled as a Tank (TL) block.
I have specified the maximum volume as 5.6 m^3, but when I run the simulation, the volume of fluid in the tank is more than 5.6 m^3. Why is that?
volume, tank (tl), simscape MATLAB Answers — New Questions
How to model a vertically mounted gas-sprung hydraulic height-controlled piston in Simscape?
I’m trying to model a vertically mounted piston system in Simscape where the piston supports a variable weight (parameter-controlled). The system consists of a gas-sprung hydraulic actuator that uses a floating piston to separate the gas volume from the hydraulic oil volume. The purpose is to maintain a pressure-balanced condition to support the load while allowing controlled height adjustment.
Key features of the system:
The piston is mounted vertically to the ground.
The upper chamber contains compressible gas (e.g., nitrogen), acting as a spring.
The lower chamber contains hydraulic oil.
A floating piston separates the gas and oil, maintaining pressure equilibrium.
The supported load is a variable parameter.
The system includes height (position) control of the main piston.
I need guidance on:
How best to model the gas/oil separation with a floating piston in Simscape.
Incorporating the gas spring behavior.
Implementing height control in the model.
Handling the variable load and its interaction with the system.
Any help, component suggestions, or example models would be greatly appreciated.
simscape, piston, hydraulic MATLAB Answers — New Questions
I can’t seem to connect to my account to update my license
I need to update my license, but I am unable to connect to my account. It won’t even let me reset my password.
I received a message that there is "suspicious activity" and I should contact support.
update license, re-set password, "suspicious activity" message MATLAB Answers — New Questions
Automatic reenabling and disabling Stateflow chart
I have a simple state machine that I would like to disable automatically via Enable_SM once all states within the Stateflow chart are executed (done). After the state machine has finished execution, I want the toggle switch to switch off automatically.
Then, I would like to be able to start the state machine manually again using the toggle switch.
Can you please advise me how to implement this?
automatic reenabling and disabling stateflow chart MATLAB Answers — New Questions
MATLAB won’t invert a function in the Laplace domain…
We are using a fairly simple function of the gamma distribution. When I use whole numbers for the "n" parameter, MATLAB inverts the function well. However, when I use non-integer values for n (e.g., n=11/10 or 1.1), MATLAB is unable to invert the function.
Here is the error message:
Warning: Error in state of SceneNode. The following error was reported evaluating the function in FunctionLine update: Unable to convert symbolic expression to double array because it contains symbolic function that does not evaluate to number. Input expression must evaluate to number.
I want to be able to use various values of n that are not integers.
Here is my code.
Do I need to use a numerical inversion method? Or is there a simple means of fixing this within MATLAB?
% Clear all variables and close all plots.
clc
close all;
clear all;
% Declare the variables that are symbolic.
syms t s N n m r t_close tbar
syms E_t E_s Day_hours
syms C_a_s C_a_t Fun_s
% Define the values of the model parameters
tbar=8; % tbar is one of the gamma distribution parameters.
n=1; % n is the second of two parameters needed to
% define the gamma distribution shape. set n=1.1, and it
% does not work.
Day_length=24;
r=0.5;
t_close=8;
m=sym(tbar/t_close);
% N is the number of cycles for which we
% want this to run. For now, one cycle is
% enough to determine whether this will work.
for N=1
% Define the gamma distribution
E_t(N)=((1/tbar)*(((t)/tbar)^(n-1))*((n^n)/gamma(n))*exp(-n*(t)/tbar));
% Transform the gamma distribution into the Laplace domain
E_s(N)=laplace(E_t(N));
% Define a function that is a result of a derivation
Fun_s(N)=(exp(-Day_length*(N-1)*s))/s-exp((-Day_length*(N-1)-t_close)*s)/s;
% Define the concentration function in the Laplace domain
C_a_s(N)=(m/(tbar*s))*((1-E_s(N))/(1-r*E_s(N)))*Fun_s(N);
% Now, take the inverse laplace of the concentration function to
% put it in the time domain.
% if n is not an integer, it won’t invert the function.
% any ideas???
C_a_t(N) = ilaplace(C_a_s(N),s,t)
fplot(C_a_t(N),[0 60])
end
inversion of a laplace transform MATLAB Answers — New Questions
Entra ID’s Keep Me Signed In Feature – Good or Bad?
Should Microsoft 365 Tenants Disable Keep Me Signed In?
When I wrote about the Entra ID Keep Me Signed In (KMSI) feature in February 2022, I concluded that growing threats might have made the feature less valuable than it once was. Like anything to do with Microsoft 365, the passing of time requires re-evaluation of attitudes and opinions, and this is true for KMSI too. Here’s my best attempt at summarizing the current state of the art.
Recapping How the Keep Me Signed In Feature Works
As a recap, KMSI is the option presented to users after they authenticate to “stay signed in” to reduce the number of times Entra ID forces the user to sign in. If the user chooses to stay signed in by choosing the Yes option (Figure 1), Entra ID creates a persistent authentication cookie that can last for up to 90 days (as opposed to 24 hours, which is the lifetime of a non-persistent cookie). With a persistent authentication cookie available, the user can connect to applications without signing in for the lifetime of the cookie. Because the cookie is persistent, it doesn’t matter if the browser session is restarted.

The Don’t show this again checkbox has nothing to do with the creation of the persistent authentication cookie. The checkbox controls whether Entra displays the prompt on the device for future sign-ins.
Obviously, a persistent authentication cookie is a bad idea if workstations are shared, but when workstations are personal and only used by a single person, keep me signed in is a nice way to reduce the friction of signing in. In fact, the Entra ID sign-in flow contains some logic to detect if a sign-in originates from a shared device and won’t show the stay signed in screen in this case. The same is true if Entra ID considers a sign-on to be high risk.
Clearing browser cookies on a workstation will remove the persistent authentication cookie.
Conditional Policies and Sign-in Frequency
Conditional access policies can interfere with the operation of persistent authentication cookies. If a conditional access policy insists that users reauthenticate based on a certain frequency, the full authentication process is invoked, and users must provide credentials. Some tenants impose unreasonable demands on users (or just guest accounts) and insist on very frequent authentication, so it’s a matter of achieving balance between annoying users and maintaining the desired level of security.
Considering the Question of Enabling Keep Me Signed In
All of which brings me back to the question of whether Microsoft 365 tenants should enable or disable KMSI. Generally speaking, I don’t see anything wrong with KMSI when the following conditions are true:
- People use personal rather than shared workstations. Authentication processing for people who use shared workstations can be controlled by specific conditional access policies.
- Strong multi-factor authentication is in place to ensure that the initial authentication is secure and is unlikely to be compromised by external attackers. In other words, use the Microsoft authenticator app or passkeys.
- Conditional access policies are in place to impose a reasonable sign-in frequency (a sketch of such a policy appears after this list). Monthly seems about right. After using a weekly frequency for the last few years (for one tenant that I access frequently as a guest), I think this interval creates too much friction.
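For illustration, here's a minimal sketch (an example, not a recommended production policy) of a conditional access policy that imposes a 30-day sign-in frequency, created in report-only mode with the Microsoft Graph PowerShell SDK. The display name and scope are assumptions; add break-glass accounts to the exclusions before enabling anything like this.

# Create a report-only conditional access policy with a 30-day sign-in frequency
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess" -NoWelcome
$PolicyBody = @{
    displayName = "Sign-in frequency: 30 days (report-only)"
    state       = "enabledForReportingButNotEnforced"   # Switch to 'enabled' after testing
    conditions  = @{
        users        = @{ includeUsers = @("All"); excludeUsers = @() }   # Add break-glass account object IDs here
        applications = @{ includeApplications = @("All") }
    }
    sessionControls = @{
        signInFrequency = @{ isEnabled = $true; type = "days"; value = 30 }
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $PolicyBody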
As always, the first order of business is to prevent user accounts being compromised. If an account is not compromised, KMSI is unlikely to cause a problem. The widespread adoption of continuous access evaluation by Microsoft 365 workloads makes closing off compromised account access easier, but that’s no excuse to avoid deploying strong multifactor authentication everywhere to protect every Microsoft 365 account.
Configuring Keep Me Signed In
To configure KMSI for everyone in a tenant, use the checkbox in User settings in the Entra admin center (Figure 2). KMSI is either enabled or disabled. It can’t be enabled for a specific group of users and disabled for everyone else.

KMSI is Fine in the Right Conditions
Microsoft 365 users have enough on their plate to cope with the ongoing and constant change in the apps they use daily. Reducing friction from sign-ins through features like KMSI seems like a good idea, providing it can be done securely and doesn’t compromise the tenant. Deploying strong multifactor authentication and effective conditional access policies go a long way to establishing the right conditions for KMSI. But if your tenant is open to compromise because it still uses single factor authentication (passwords) or lets people use weak multifactor authentication methods, don’t blame KMSI when you are compromised. At that point, persistent authentication cookies are the least of your worries.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
Microsoft leads shift beyond data unification to organization, delivering next-gen AI readiness with new Microsoft Fabric capabilities
We’re in a hinge moment for AI. The experiments are over and the real work has begun. Centralizing data, once the finish line, is now the starting point. The definition of “AI readiness” is evolving as increasingly sophisticated agents demand rich, contextualized data grounded in business operations to deliver meaningful results. What sets leaders apart is the quality of the data platform experience in delivering on the shared meaning, live context and interactivity that helps systems understand the business as it is, not just as a static report. Across industries, frontier firms are dissolving silos and equipping teams with AI agents and reasoning systems that go beyond answers to help people build, explore, decide and act. The result: a new rhythm of work that’s faster, more connected, more explainable and closer to the customer.
Microsoft Fabric: Powering AI‑Ready data innovation enterprise‑wide at FabCon Europe
As the first hyperscaler to fully embrace this paradigm, Microsoft is introducing new capabilities in its fastest-growing data and analytics platform, Microsoft Fabric, at the European Microsoft Fabric Community Conference (FabCon). With Fabric, we are bringing together all of an organization’s data into a single, AI‑ready foundation so every team can turn data into actionable insight with the full context of their business. At FabCon, Microsoft is announcing a major leap forward in its delivery of AI data readiness with Graph in Fabric, a low/no-code platform for modeling and analyzing relationships across enterprise data; and Maps in Fabric, which joins the recently launched digital twin builder in Microsoft Fabric as part of Real-Time Intelligence and brings geospatial analytics into Fabric, enabling users to visualize and enrich location-based data at scale.
We’re also expanding Fabric’s capabilities further with new OneLake shortcuts and mirroring sources, a Graph database connecting entities across OneLake, enhanced developer experiences and new security controls — providing everything needed to run mission-critical scenarios on Fabric.
These capabilities mark a fundamental evolution in data strategy for business leaders scaling intelligent AI applications and agents across their organizations.
Train smarter agents with Graph and Maps
The foundation of every successful AI agent isn’t just data — it’s organized knowledge. As businesses accelerate into the AI era, the challenge isn’t gathering more information, but structuring it so agents can reason, connect and act with purpose.
The previews of Graph and Maps in Fabric are designed to help businesses organize their raw data for real-world impact. Graph in Fabric draws on the graph design principles proven at LinkedIn to reveal connections across customers, partners and supply chains, enabling organizations to visualize and query relationships that drive business outcomes.
Maps in Fabric brings geospatial analytics, empowering teams to make location-aware decisions as they respond to operational challenges in real time.
But these aren’t just technical milestones; they’re strategic tools for business leaders. AI is sparking new cross-company collaboration by connecting enterprise data — uniting business functions, accelerating decisions and empowering teams to share and scale value through open data flow. Whether it’s mapping supply chain dependencies or visualizing customer journeys, Graph and Maps help businesses move from isolated data points to a connected, actionable foundation for AI.
Discover how Graph and Maps in Fabric unlock real-time intelligence for AI-driven operations. Get the engineering inside scoop from Corporate Vice President of Messaging and Real-Time Analytics, Yitzhak Kesselman, in his latest blog: “The Foundation for Powering AI-Driven Operations.”
Enhancing developer experiences across Fabric to accelerate AI projects
Fabric is quickly becoming the go-to platform for data developers worldwide. To fuel that momentum, we’re rolling out new tools that make it easier to build, automate and innovate.
The new Fabric Extensibility Toolkit simplifies architecture and automation — so every solution is secure, scalable and aligned to business needs. And with the preview of Fabric Model Context Protocol (MCP), developers can tap into AI-assisted code generation and item authoring right inside familiar environments like Visual Studio Code and GitHub Codespaces.
These updates aren’t just for software developers. They’re for any business leader ready to turn organized data into competitive advantage. Fabric helps teams move from experimentation to enterprise-scale impact, with speed and governance built in.
OneLake: The AI-Ready data foundation
OneLake is the unified data lake at the heart of Fabric. It’s designed to ingest data once and make it instantly usable across analytics, AI and applications to accelerate insight. Today, we’re introducing new features to give teams unprecedented visibility and control with OneLake.
With the addition of mirroring capabilities for Oracle and Google BigQuery, expanded support for data agents and OneLake shortcuts to Azure Blob Storage, organizations can bring all their data together, no matter where it lives.
OneLake shortcut transformations can now convert JSON and Parquet files to Delta tables for instant analysis. OneLake also offers secure governance tools, including a new Secure tab in the catalog for managing permissions and a Govern tab for data oversight.
We’re also releasing the Azure AI Search integration with OneLake. By making this available in the Azure AI Foundry portal, we’re streamlining the experience for developers and data teams, helping them build smarter, more context-aware agents faster.
Our OneLake Table API preview allows apps to discover and inspect tables using Fabric’s security model, while OneLake diagnostics enables workspace owners to capture all data activity and storage operations.
Microsoft Fabric and Azure AI Foundry: A complete data, AI and agent ecosystem
In the AI era, every project is a data project, and success depends on reducing complexity. Microsoft is addressing this head-on by continuing to natively integrate Fabric and Azure AI Foundry together to help simplify how enterprises design, customize and manage AI apps and agents.
Fabric provides a single way to reason over data wherever it resides, delivering the structured, contextualized foundation AI needs. On top of that foundation, Azure AI Foundry enables developers to work with their favorite tools, including GitHub, Visual Studio and Copilot Studio, to efficiently build and scale AI applications and agents, while giving IT leaders visibility into performance, governance and ROI.
By bringing data, models and operations together, Fabric and Azure AI Foundry help businesses accelerate innovation and align AI initiatives with strategic goals. This unified approach eliminates complexity, speeds adoption and creates a platform-first advantage so organizations can unlock new value from their data and lead in the next generation of AI readiness.
Build the foundation, lead the future
The organizations leading this next chapter aren’t just deploying AI, they’re engineering for it. That starts with a foundation where data is unified, governed and now enriched with context so AI apps and agents can act confidently and scale without friction. Graph and Maps, enhanced developer tools, OneLake improvements and integration with Azure AI Foundry push Microsoft Fabric past data unification into AI‑ready, context‑rich data built for tomorrow’s AI challenges.
Those organizations are also skilling up. Thousands of Fabric users have passed their exams, collectively earning more than 50,000 certifications across the Foundry, Fabric Analytics Engineer and Fabric Data Engineer roles.
The future of AI belongs to platforms, not point solutions — ecosystems that connect data, intelligence and action. With that foundation, every agent, app and insight compounds value. Microsoft delivers that platform today, helping organizations unlock new levels of intelligence and impact.
Explore the full spectrum of new features coming to Fabric in today’s blog from Arun Ulagaratchagan, Corporate Vice President of Azure Data: “FabCon Vienna: Build data-rich agents on an enterprise-ready foundation.”
The post Microsoft leads shift beyond data unification to organization, delivering next-gen AI readiness with new Microsoft Fabric capabilities appeared first on The Official Microsoft Blog.
create array and plot the function
I am trying to create an array of 100 input samples in the range of 1 to 100 using the linspace function, and plot the equation y(x)=20 log10(2x) on a semilogx plot. I would also like to draw a solid blue line of width 2, and label each point with a red circle. And I want to create an array of 100 input samples in the range of 1 to 100 using the logspace function, and plot the equation y(x)=20 log10(2x) on a semilogx plot with a solid red line of width 2, and label each point with a black star.
array, plotting, complex MATLAB Answers — New Questions
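A minimal sketch of one way to produce the two plots described above (the axis labels and legend text are my own additions):
% 100 linearly spaced samples from 1 to 100
x1 = linspace(1, 100, 100);
y1 = 20*log10(2*x1);
% 100 logarithmically spaced samples from 10^0 to 10^2 (i.e. 1 to 100)
x2 = logspace(0, 2, 100);
y2 = 20*log10(2*x2);
figure
semilogx(x1, y1, 'b-o', 'LineWidth', 2, 'MarkerEdgeColor', 'r')   % solid blue line, red circles
hold on
semilogx(x2, y2, 'r-*', 'LineWidth', 2, 'MarkerEdgeColor', 'k')   % solid red line, black stars
hold off
grid on
xlabel('x')
ylabel('y(x) = 20log_{10}(2x)')
legend('linspace samples', 'logspace samples', 'Location', 'best')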
new network license server
I currently have FlexLM installed on a physical node for MATLAB. Since we need to decommission this node, I must migrate to a new FlexLM installation on a virtual machine with MAC address fa:16:3e:03:a4:90. Could you please provide me with a link to a guide on how to perform this migration?
flexlm MATLAB Answers — New Questions
C Caller error when passing 2D array inside struct via Simulink bus
I have a variable in C header defined as
typedef struct {
double a[1][4];
} temp
and a C function:
void Fun(Temp* temp) {
}
In Simulink, I try to send 1×4 data packed into a bus and pass it to the C Caller block, which calls Fun() as shown in the below images.
However, when building, I get the following errors:
m_6ytl4aiSKJUAISadDdLIqD.c
m_6ytl4aiSKJUAISadDdLIqD.c(52): error C2440: ‘=’: cannot convert from ‘float64 [4]’ to ‘real_T’
m_6ytl4aiSKJUAISadDdLIqD.c(57): error C2440: ‘=’: cannot convert from ‘float64 *’ to ‘real_T’
Interestingly, if I do not use the struct and bus, and instead directly pass a pure 1×4 array like this:
void Fun(double a[][4]) {
}
then it works fine.
So, the error occurs only when the 2D array is packaged inside a struct and transmitted via a Simulink bus.
Could you help me understand why this happens, and how to properly map the bus signal to the struct array in C Caller?
Thank you!
simulink, c caller MATLAB Answers — New Questions
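One likely cause, sketched below under the assumption that the bus element was left with scalar dimensions: define the bus element that feeds the C Caller block with the same [1 4] dimensions as the array field in the C struct, so the generated code does not try to assign the array to a single real_T. The names follow the question; adjust them to your model.
% Bus element matching "double a[1][4]" in the C header
elem            = Simulink.BusElement;
elem.Name       = 'a';
elem.DataType   = 'double';
elem.Dimensions = [1 4];
% Bus object to use as the data type of the signal driving the C Caller argument
TempBus          = Simulink.Bus;
TempBus.Elements = elem;
% Set the Bus Creator output data type (or the port data type) to 'Bus: TempBus'
% so the 1x4 array reaches Fun() with its dimensions intact.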
what does audioDeviceWriter do with complex input data?
When I use ifft() to process a frame of audio data but forget to take real() of the result, it sounds weird.
So I wonder: what happens when audioDeviceWriter plays complex input data?
I tried it with a 1 kHz pure tone and then added a very small imaginary part. When each signal is played on its own (with the other call commented out), the two sound very different. But when I play the two in sequence with a pause(3), with the complex one first and the real one second, they sound the same (weird).
Furthermore, when I change the order and put the real one first and the complex one second, an error occurs.
I’d be glad to get an answer to this question, thank you!
Here is the code; you can change the order of adw(playbuff2) and adw(playbuff) and test:
clc;
clear;
fs=48e3;
f0=1e3;
T=1;
N=1:T*fs;
t=N/fs;
wav=sin(2*pi*f0*t).’;
adw=audioDeviceWriter('SampleRate',fs);
wav2=wav;
wav2=wav+1e-18*j*ones(size(wav));
playbuff2=[wav2];
playbuff=[wav];
adw(playbuff2);
pause(3);
adw(playbuff);
audiodevicewriter, complex MATLAB Answers — New Questions
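The usual fix when ifft() output is destined for playback is to discard the imaginary residue explicitly before handing the frame to the device. A minimal sketch, assuming the workflow described in the question:
fs  = 48e3;
f0  = 1e3;
t   = (1:fs)'/fs;
wav = sin(2*pi*f0*t);
X = fft(wav);
y = ifft(X);                     % can carry a tiny imaginary part numerically
y = real(y);                     % make the frame purely real before playback
adw = audioDeviceWriter('SampleRate', fs);
adw(y);
release(adw);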
Time to peak using findpeaks
I’m using findpeaks to locate multiple peaks in my function, but I want to know the rise time to each peak.
findpeaks gives you the ‘width’ output, but it’s not working because it assumes the peak is in the middle of the valleys (and it’s based on the prominence; it would be nice if it were referenced to the lowest point/left valley). Most of these peaks are not symmetric, so it’s not working properly. Any ideas?
findpeaks, rising time MATLAB Answers — New Questions
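One workaround is to measure the rise time yourself, from the preceding valley (local minimum) to each peak, rather than relying on the ‘width’ output. A hedged sketch, assuming y is the signal and t its time vector:
[~, peakIdx]   = findpeaks(y);
[~, valleyIdx] = findpeaks(-y);                      % valleys are peaks of the negated signal
riseTime = nan(size(peakIdx));
for k = 1:numel(peakIdx)
    prior = valleyIdx(valleyIdx < peakIdx(k));       % valleys before this peak
    if ~isempty(prior)
        riseTime(k) = t(peakIdx(k)) - t(prior(end)); % time from left valley to peak
    end
end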
Copilot Administrative Skills Don’t Do Much for SharePoint Management
SharePoint Skills in Copilot Won’t Impress SharePoint Administrators
Message Center notification MC1147976 (4 September 2025, Microsoft 365 roadmap item 501427) apparently heralds a new era of AI-enhanced administrative assistance for Microsoft 365 workloads. The post describes two skills to assist administrators in the SharePoint Admin Center:
- Step-by-step task guidance: Copilot provides clear instructions to help administrators complete common tasks.
- Multi-variable site search: Copilot enables administrators to search for sites using multiple conditions, such as inactivity, external sharing, and size, and suggests recommended actions.
The change will roll out in general availability worldwide from October 6, 2025. The capability showed up in my targeted release tenant, so I thought that I’d ask Copilot to help me to manage SharePoint Online, especially because of the promise that Copilot will help “both new and experienced admins complete tasks faster.” Alas, the skills exhibited by Copilot didn’t live up to expectations.
SharePoint Skills and the Promise of AI
Largely because of Teams, SharePoint Online administrators have many more sites to manage than in the past. It therefore makes perfect sense to apply artificial intelligence to help administrators detect potential problems that might be lurking or to find sites that need attention.
I started by asking Copilot to find which sites hold the most files. That seems like a pretty simple question for AI to answer, but apparently it isn’t: Copilot couldn’t answer, saying that it was unable to search for that criterion (Figure 1).

Hmmm… Such a response seems at odds with Microsoft’s promise that Copilot will strengthen governance at scale by allowing administrators to “ask complex questions and receive actionable results, making it easier to detect risks and enforce lifecycle policies across large environments.” Knowing which sites store the most files seems like a fundamental piece of information from a data lifecycle perspective.
SharePoint Skills Need Data
The root of the problem is likely to be the data available for Copilot to reason over. All the Microsoft 365 admin centers present sets of data relevant to a workload through their UX. The Exchange admin center deals with mailboxes and other mail-enabled objects; the Entra admin center deals with directory objects; the Teams admin center deals with Teams policies and other team-related information, and so on. The information in these data sets is whatever’s accessible through and presented by the admin centers.
In the case of my question, the SharePoint Online admin center doesn’t have the data to respond because there’s nowhere in its UX that surfaces the file count for sites. In fact, although the SharePoint admin center reports the total number of files in the tenant, finding the file count for a site takes some effort unless you use the slightly outdated information that’s available through the site usage Graph API.
On the other hand, when I asked Copilot to “Find sites without a sensitivity label that have more than 1GB of storage,” the AI could respond because the storage used by each site is available in the SharePoint admin center (Figure 2).

Delivering the Promise
Tenant administrators have a lot to do, so any tool that can help is welcome. This is a first-run implementation, so it’s bound to have flaws. Copilot can offer limited help that novice administrators might welcome while not offering much to anyone with some experience. Microsoft is likely to iterate its Copilot assistance for SharePoint administrators to improve, deepen, and enhance what Copilot can offer, but I fear it will take several attempts before the promise of AI is delivered.
What SharePoint Skills Would Help Administrators?
This raises the question of what kind of assistance Microsoft 365 administrators might want AI tools incorporated into the admin centers to deliver. To me, the answer lies in bringing information together from available sources to answer questions faster than a human being can.
For example, SharePoint advanced management includes a change history report. It would be nice if an administrator could ask Copilot to review all changes made to SharePoint over the last month and report changes, such as sensitivity label updates, that generate label mismatches for documents in any site. The information is available in audit logs and SharePoint document libraries, but it takes effort to bring everything together in a concise and understandable format. AI should be capable of answering questions like this instead of just running simple queries against site properties, which is all that Copilot can do today, and hardly a great example of AI in action.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.