Month: November 2025
How can I view the Speedgoat target screen and system log on my host computer?
Can I view the display connected to my Speedgoat target on my host computer to monitor the Simulink Real-Time (SLRT) simulation and take screenshots?
slrt, targetscreen, statusmonitor, systemlog, console, log MATLAB Answers — New Questions
Automating Microsoft 365 with PowerShell December 2025 Update
Update #18 for Our PowerShell eBook

As we normally do to free up time to build the monthly update for the Office 365 for IT Pros eBook, we have released the monthly update (version 18) of the Automating Microsoft 365 with PowerShell eBook. This eBook is available separately or as part of the Office 365 for IT Pros package. The version number is clearly indicated on the inside front cover and in the footer of each page.
Subscribers can download the updated PDF and EPUB files using the link in the receipt emailed to them after their purchase. The link always fetches the latest book files.
We’ve also updated the paperback edition of Automating Microsoft 365 with PowerShell, and the revised paperback should now be available. No updates are offered for this edition, so what’s printed on the pages is what you get. Even so, we still reckon that the printed content is well worth the price: 400 pages of PowerShell goodness, including hundreds of examples of how to use PowerShell and the Microsoft Graph APIs to automate Microsoft 365 processes.
Monthly Updates in Automating Microsoft 365 with PowerShell
Just like the main book, the monthly update for the PowerShell eBook includes a mix of minor changes, corrections, and new features. Probably the biggest new feature is the ability to restore soft-deleted security groups, just as has been possible for Microsoft 365 Groups since 2016. Given the widespread use of security groups (including dynamic security groups) to drive other Entra features, like group-based license assignment, it’s very welcome to be able to rescue a deleted group after an error results in its removal.
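As a sketch of how a restore might look with the Microsoft Graph PowerShell SDK (the cmdlets shown come from the SDK’s directory management module; the group name and permission scope are placeholder assumptions):

```powershell
# Connect with a permission that allows managing deleted directory objects
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# List soft-deleted groups still within the retention window
$deletedGroups = Get-MgDirectoryDeletedItemAsGroup -All
$deletedGroups | Format-Table DisplayName, Id, DeletedDateTime

# Restore a specific group by its object identifier
$target = $deletedGroups | Where-Object DisplayName -eq "Sales Admins"  # placeholder name
Restore-MgDirectoryDeletedItem -DirectoryObjectId $target.Id
```

The restored group comes back with its previous membership and any dependent settings, which is exactly what you want when a group used for license assignment disappears.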
Another change is that the default app management policy can now be updated through the Entra admin center. This is important because the policy controls details such as whether apps can use app secrets for authentication (a horrible idea in production). Custom app management policies, which override the default tenant policy, can still only be managed through PowerShell.
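For the PowerShell side, here’s a hedged sketch of reading and updating the default policy (cmdlets from the Graph SDK’s sign-in policy module; the request body follows the Graph defaultAppManagementPolicy schema, so verify the property names against current documentation before using in production):

```powershell
# Inspect the current tenant default app management policy
Get-MgPolicyDefaultAppManagementPolicy |
    Format-List IsEnabled, ApplicationRestrictions

# Example: enable the policy and block adding new app secrets (passwords)
# to applications created after a cut-off date
$body = @{
    isEnabled = $true
    applicationRestrictions = @{
        passwordCredentials = @(
            @{
                restrictionType = "passwordAddition"
                restrictForAppsCreatedAfterDateTime = "2024-01-01T00:00:00Z"
            }
        )
    }
}
Update-MgPolicyDefaultAppManagementPolicy -BodyParameter $body
```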
Updated PowerShell Modules
During November, Microsoft issued updates for the Microsoft Teams module (to V7.5) and the Microsoft Graph PowerShell SDK (V2.32). Apart from some cmdlet changes that might eventually make it possible to change the ownership of a meeting recording from the current default (the meeting organizer), there’s not much to say about the new Teams module. The assembly clash with the Microsoft Graph PowerShell SDK still exists.
Speaking of the SDK, it’s good to see that Microsoft has started to burn down the Microsoft Graph PowerShell SDK open issues list, which had got to a rather alarming level of well over 200 known problems. Some issues go back to the bad old days when SDK releases were bedevilled with poor quality, but that’s no reason not to investigate, fix (if necessary), and close the problems.
In any case, I believe that V2.31 and V2.32 are stable releases. Many of the issues reported for earlier releases are fixed and I haven’t encountered any big issues since I installed V2.32. When updating the SDK, make sure to consider the SDK modules loaded as resources into Azure Automation runtime environments too, and remember that the version of the Microsoft.Graph.Authentication module dictates the required version for all the other SDK modules in that runtime environment.
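A quick way to spot version drift across installed SDK modules on a workstation (the same principle applies to the module resources loaded into an Azure Automation runtime environment):

```powershell
# Every Microsoft Graph SDK module version should line up with the
# version of Microsoft.Graph.Authentication
Get-Module -Name Microsoft.Graph.* -ListAvailable |
    Sort-Object Name |
    Select-Object Name, Version |
    Format-Table -AutoSize
```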
On to Version 19
That Microsoft 365 is in a state of constant flux is evident to anyone who looks through the message center in the Microsoft 365 admin center. The same is true for PowerShell, even if the change is of a different nature and is spread across multiple modules and PowerShell itself. More importantly, change comes through acquired knowledge, some of which we hope to capture in Automating Microsoft 365 with PowerShell. Enjoy the book!
Hi everyone, I have a question about PINNs.
Hi everyone, I recently started working on PINNs. I am trying to apply a Lie-symmetry-enhanced PINN (sPINN). For this purpose, I tried to train the KdV equation from [1] with the same conditions. The problem is that, in theory, an sPINN should give a better approximation than a classic PINN, but I think something is missing in my code. Starting from the code in [2], I changed the initial condition and the equation, and added the symmetry condition.
[1] Enforcing continuous symmetries in physics-informed neural network for solving forward and inverse problems of partial differential equations
[2] https://www.mathworks.com/help/deeplearning/ug/solve-partial-differential-equations-with-lbfgs-method-and-deep-learning.html
clear all;
close all;
clc;
%% Generate Training Data
% Number of points on the left and right boundaries
numBoundaryConditionPoints = [128 128];
% x-coordinates on the boundaries: x = 0 (left) and x = 1 (right)
x0BC1 = zeros(1,numBoundaryConditionPoints(1));
x0BC2 = ones(1,numBoundaryConditionPoints(2));
% 128 equally spaced time points between 0 and 1 for each boundary
t0BC1 = linspace(0,1,numBoundaryConditionPoints(1));
t0BC2 = linspace(0,1,numBoundaryConditionPoints(2));
% Calculate boundary
u0BC1 = 12*sech(-4*t0BC1).^2;
u0BC2 = 12*sech(1 - 4*t0BC2).^2;
numInitialConditionPoints = 256;
x0IC = linspace(0,1,numInitialConditionPoints);
t0IC = zeros(1,numInitialConditionPoints);
% Initial condition
u0IC = 12*sech(x0IC).^2;
% Group together the data for initial and boundary conditions.
X0 = [x0IC x0BC1 x0BC2];
T0 = [t0IC t0BC1 t0BC2];
U0 = [u0IC u0BC1 u0BC2];
% Defining the Number of Points
numInternalCollocationPoints = 10000;
% Generate random interior collocation points: x in [0, 2], t in [0, 1]
points = rand(numInternalCollocationPoints,2);
dataX = 2*points(:,1);
dataT = points(:,2);
%% Define Neural Network Architecture
numBlocks = 8;
fcOutputSize = 20;
% This creates a fundamental block that will be repeated:
fcBlock = [
fullyConnectedLayer(fcOutputSize)
tanhLayer];
layers = [
featureInputLayer(2) % featureInputLayer(2): This is the input layer that accepts your features. The 2 corresponds to (x, t) coordinates.
repmat(fcBlock,[numBlocks 1]) % Creates a deep network: Input → Block 1 → Block 2 → … → Block N
fullyConnectedLayer(1)
];
% Convert the layer array to a dlnetwork object.
net = dlnetwork(layers);
% Training a PINN can result in better accuracy when the learnable parameters have data type double.
% Convert the network learnables to double using the dlupdate function.
% Note that not all neural networks support learnables of type double, for example, networks that use GPU optimizations that rely on learnables with type single.
net = dlupdate(@double,net);
%% Define Model Loss Function
% This is the core loss function that makes Physics-Informed Neural Networks (PINNs) work! It’s where the "physics" is enforced.
function [loss,gradients] = modelLoss(net,X,T,X0,T0,U0)
% Make predictions with the initial conditions.
XT = cat(1,X,T);
U = forward(net,XT);
% Calculate derivatives with respect to X and T.
X = stripdims(X);
T = stripdims(T);
U = stripdims(U);
Ux = dljacobian(U,X,1);
Ut = dljacobian(U,T,1);
% Calculate second-order derivatives with respect to X.
Uxx = dldivergence(Ux,X,1);
Uxxx = dldivergence(Uxx,X,1);
% Calculate mseF. (Physics Loss 1: The PDE)
f = Ut + U.*Ux + Uxxx;
mseF = mean(f.^2);
% Calculate mseG. (Physics Loss 2: Your new constraint)
g = 4.*Ux + Ut;% + (X.*U)./2;
mseG = mean(g.^2);
% Calculate mseU. (Data Loss: Initial + Boundary)
XT0 = cat(1,X0,T0);
U0Pred = forward(net,XT0);
mseU = l2loss(U0Pred,U0);
% Combined loss
loss = mseF + mseU + mseG;
% Calculate gradients with respect to the learnable parameters.
gradients = dlgradient(loss,net.Learnables);
end
%% Specify the training options:
solverState = lbfgsState;
maxIterations = 500;
gradientTolerance = 1e-5;
stepTolerance = 1e-5;
%% Train Neural Network
% Convert the training data to dlarray objects.
% Specify that the inputs X and T have format "BC" (batch, channel) and that the initial conditions have format "CB" (channel, batch).
X = dlarray(dataX,"BC");
T = dlarray(dataT,"BC");
X0 = dlarray(X0,"CB");
T0 = dlarray(T0,"CB");
U0 = dlarray(U0,"CB");
% Accelerate the loss function using the dlaccelerate function.
accfun = dlaccelerate(@modelLoss);
% Create a function handle containing the loss function for the L-BFGS update step.
% In order to evaluate the dlgradient function inside the modelLoss function using automatic differentiation, use the dlfeval function.
lossFcn = @(net) dlfeval(accfun,net,X,T,X0,T0,U0);
% Initialize the TrainingProgressMonitor object.
% At each iteration, plot the loss and monitor the norm of the gradients and steps.
% Because the timer starts when you create the monitor object, make sure that you create the object close to the training loop.
monitor = trainingProgressMonitor( ...
    Metrics="TrainingLoss", ...
    Info=["Iteration" "GradientsNorm" "StepNorm"], ...
    XLabel="Iteration");
iteration = 0;
while iteration < maxIterations && ~monitor.Stop
iteration = iteration + 1;
[net, solverState] = lbfgsupdate(net,lossFcn,solverState);
updateInfo(monitor, ...
    Iteration=iteration, ...
    GradientsNorm=solverState.GradientsNorm, ...
    StepNorm=solverState.StepNorm);
recordMetrics(monitor,iteration,TrainingLoss=solverState.Loss);
monitor.Progress = 100*iteration/maxIterations;
if solverState.GradientsNorm < gradientTolerance || ...
        solverState.StepNorm < stepTolerance || ...
        solverState.LineSearchStatus == "failed"
break
end
end
%% Exact solution for your specific case
function U = solveEq(X,T)
U = 12*sech(X-4.*T).^2;
end
%% Evaluate Model Accuracy
tTest = [0.25 0.5 0.75 1];
numObservationsTest = numel(tTest);
szXTest = 1001;
XTest = linspace(0,1,szXTest);
XTest = dlarray(XTest,"CB");
% Test the model.
UPred = zeros(numObservationsTest,szXTest);
UTest = zeros(numObservationsTest,szXTest);
for i = 1:numObservationsTest
t = tTest(i);
TTest = repmat(t,[1 szXTest]);
TTest = dlarray(TTest,"CB");
XTTest = cat(1,XTest,TTest);
UPred(i,:) = forward(net,XTTest);
UTest(i,:) = solveEq(extractdata(XTest),t);
end
err = norm(UPred - UTest) / norm(UTest);
fprintf('Relative error: %e\n', err);
figure
tiledlayout("flow")
for i = 1:numel(tTest)
nexttile
plot(XTest,UPred(i,:),"-",LineWidth=2);
hold on
plot(XTest, UTest(i,:),"--",LineWidth=2)
hold off
ylim([0, 13])
xlabel("x")
ylabel("u(x," + tTest(i) + ")")
end
legend(["Prediction" "Target"])
%% Create Density Plots with Rainbow Color Range
% Create a finer grid for density plots
xGrid = linspace(0, 1, 200);
tGrid = linspace(0, 1, 100);
[XGrid, TGrid] = meshgrid(xGrid, tGrid);
% Create predicted solution matrix
UPredDensity = zeros(length(tGrid), length(xGrid));
UTestDensity = zeros(length(tGrid), length(xGrid));
% Generate predictions for each point in the grid
for i = 1:length(tGrid)
for j = 1:length(xGrid)
% Predicted solution
XPoint = dlarray(xGrid(j), "CB");
TPoint = dlarray(tGrid(i), "CB");
XTPoint = cat(1, XPoint, TPoint);
UPredDensity(i,j) = extractdata(forward(net, XTPoint));
% Exact solution
UTestDensity(i,j) = solveEq(XGrid(j), TGrid(i));
end
end
% Create density plots
figure('Position', [100, 100, 1200, 500]);
% Predicted solution density plot
subplot(1,2,1);
imagesc(xGrid, tGrid, UPredDensity);
colormap(jet); % Rainbow colormap
colorbar;
axis xy; % Put the origin at the bottom-left (time increasing upward)
xlabel('x');
ylabel('t');
title('Predicted Solution Density');
set(gca, 'FontSize', 12);
% Exact solution density plot
subplot(1,2,2);
imagesc(xGrid, tGrid, UTestDensity);
colormap(jet); % Rainbow colormap
colorbar;
axis xy; % Put the origin at the bottom-left (time increasing upward)
xlabel('x');
ylabel('t');
title('Exact Solution Density');
set(gca, 'FontSize', 12);
% Add a main title
sgtitle('Solution Comparison - Density Plots', 'FontSize', 14, 'FontWeight', 'bold');
%% Line plots for specific time points (your original visualization)
figure('Position', [100, 100, 1000, 800]);
tiledlayout("flow")
for i = 1:numel(tTest)
nexttile
plot(XTest,UPred(i,:),"-",LineWidth=2);
hold on
plot(XTest, UTest(i,:),"--",LineWidth=2)
hold off
ylim([0, 13])
xlabel("x")
ylabel("u(x," + tTest(i) + ")")
title(sprintf('t = %.2f', tTest(i)))
end
legend(["Prediction" "Target"])
sgtitle('Solution Comparison - Time Slices', 'FontSize', 14, 'FontWeight', 'bold');
pinn, deep learning, neural network, neural networks, pde, physics-informed neural network MATLAB Answers — New Questions
Optimize Model Hyperparameters Using directforecaster
Dear all,
Is it possible to optimize the hyperparameters when using the directforecaster function to predict time series?
clear
close
clc
Tbl = importAndPreprocessPortData;
Tbl.Year = year(Tbl.Time);
Tbl.Quarter = quarter(Tbl.Time); % quarter derived from the table's own Time variable
slidingWindowPartition = tspartition(height(Tbl),"SlidingWindow", 4, "TestSize", 24)
Mdl = directforecaster(Tbl, "TEU", "Horizon", 1:12, "Learner", "lsboost", "ResponseLags", 1:12, …
"LeadingPredictors", "all", "LeadingPredictorLags", {0:12, 0:12}, …
"Partition", slidingWindowPartition, "CategoricalPredictors", "Quarter") % I would like to optimize the lsboost hyperparameters
predY = cvpredict(Mdl)
directforecaster, time series MATLAB Answers — New Questions
How to analyze reaction time and slicing accuracy data from a simple game like Slice Master in MATLAB?
Hi everyone, I’ve been experimenting with a reflex-based game called Slice Master to study human reaction times and precision. I can export gameplay logs that include timestamps for each slice, slice angle, and whether the slice was “perfect” or not.
My goal is to import this data into MATLAB and:
Compute reaction times between successive slices.
Plot a histogram of slice-to-slice reaction times.
Analyze the distribution of slice angles (e.g., deviation from ideal).
Identify “runs” of perfect slices (longest streak, average streak length).
Possibly apply a smoothing filter or clustering to reaction times to detect outliers or performance shifts.
What MATLAB functions / toolboxes would you recommend for these tasks? And what is the best way to structure this kind of gameplay data for analysis (e.g., timetable, table, struct)? Any example code snippets or guidance would be greatly appreciated!
matrix, game MATLAB Answers — New Questions
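In MATLAB, diff, histogram, and timetable cover most of these tasks directly. As a language-neutral sketch of the two core calculations (slice-to-slice reaction times from timestamps, and streaks of "perfect" slices), here is the same logic in Python/NumPy with made-up sample data; the log layout is an assumption, not taken from any real export:

```python
import numpy as np
from itertools import groupby

# Hypothetical log (assumed format): slice timestamps in seconds
# and a "perfect slice" flag per slice.
t = np.array([0.00, 0.41, 0.78, 1.30, 1.65, 2.02, 2.71])
perfect = np.array([True, True, False, True, True, True, False])

# Slice-to-slice reaction times (MATLAB equivalent: diff(t)).
rt = np.diff(t)

# Runs of consecutive perfect slices: longest streak and average streak length.
streaks = [sum(1 for _ in run) for flag, run in groupby(perfect) if flag]
longest = max(streaks) if streaks else 0
mean_streak = sum(streaks) / len(streaks) if streaks else 0.0
```

From there, histogram(rt) covers the plotting part in MATLAB, a timetable keyed on the timestamps is a natural container for per-slice rows, and functions such as isoutlier or smoothdata applied to rt are reasonable starting points for outlier detection and performance-shift smoothing.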
Generating Toeplitz Matrix which Matches the Convolution Shape Same
Given a filter vH I’m looking for vectors vR and vC such that:
toeplitz(vC, vR) * vX = conv(vX, vH, 'same');
For instance, for vH = [1, 2, 3, 4] and length(vX) = 7; the matrix is given by:
mH =
3 2 1 0 0 0 0
4 3 2 1 0 0 0
0 4 3 2 1 0 0
0 0 4 3 2 1 0
0 0 0 4 3 2 1
0 0 0 0 4 3 2
0 0 0 0 0 4 3
convolution, matrix, toeplitz, convolution-matrix MATLAB Answers — New Questions
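The pattern in the example matrix generalizes: with m = length(vH) and s = floor(m/2), the first column vC starts with vH(s+1:m) followed by zeros, and the first row vR starts with vH(s+1:-1:1) followed by zeros. Here is a sketch in Python/NumPy (used only because it is easy to verify; the indexing carries straight over to MATLAB's toeplitz) that builds the matrix this way and checks it against the central part of the full convolution. Note that NumPy's own mode='same' centers even-length kernels one sample differently from MATLAB, so the full result is trimmed explicitly:

```python
import numpy as np

def conv_same_matrix(h, n):
    # n-by-n Toeplitz matrix T with T @ x matching conv(x, h, 'same') in the
    # MATLAB centering shown above: the full convolution trimmed to
    # full(s+1 : s+n) (1-based), where s = floor(m/2).
    h = np.asarray(h)
    m = len(h)
    s = m // 2
    i, j = np.indices((n, n))
    k = i - j + s                      # filter tap index used at position (i, j)
    return np.where((k >= 0) & (k < m), h[np.clip(k, 0, m - 1)], 0)

h = [1, 2, 3, 4]
x = np.arange(1.0, 8.0)                # any length-7 test vector
T = conv_same_matrix(h, 7)

vC = T[:, 0]                           # first column: [3 4 0 0 0 0 0]
vR = T[0, :]                           # first row:    [3 2 1 0 0 0 0]

# Central part of the full convolution, matching MATLAB's 'same' window.
s = len(h) // 2
same = np.convolve(x, h)[s:s + len(x)]
```

In MATLAB terms, for this example vC = [vH(3); vH(4); zeros(5,1)] and vR = [vH(3), vH(2), vH(1), zeros(1,4)], which reproduces the mH shown in the question.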
How to open variables window in the same place with workspace?
I have used MATLAB for a long time, and I am happy with it. In R2024 and earlier, I used to dock the Workspace window and the Variables window in the same place (demonstrated in the figure), but since R2025 I cannot do this anymore. Can someone help me solve this problem? I really like being able to see my workspace and variables at the same time, without switching between them. user interface, matlab compiler, workspace, variables MATLAB Answers — New Questions
Purview Launches New DLP Policy to Control Copilot Prompts
DLP Policy for Copilot Chat Uses Sensitive Information Types to Detect Issues in User Prompts
As mentioned in my notes from the first day of Ignite 2025, Microsoft is rolling out a new DLP policy capability in preview to govern how people use sensitive data in Microsoft Copilot chat to “safeguard prompts.” The new policy works by detecting attempts to use sensitive information types (SITs) in prompts. The update is documented in message center notification MC1181998 (last updated 12 November 2025; Microsoft 365 roadmap item 515945). The preview runs from now until late December 2025, and Microsoft is aiming for general availability in late March 2026.
The original DLP policy for Microsoft Copilot blocks access to Office files and PDFs labeled with specific sensitivity labels. The sensitivity label-based policy stops Copilot using information stored in the labeled files in its responses to user prompts. The mechanism works well and is highly effective at preserving file confidentiality.
Separate DLP Policy Required
The new capability cannot be incorporated into an existing DLP policy for Copilot. A new policy is required to specify the set of sensitive information types like credit card numbers, bank account numbers, passport numbers, and so on for DLP to check against when users issue prompts to Copilot.
The two types of DLP policies for Copilot run quite happily alongside each other because each type of policy deals with very different information. Administrators must be a member of the Data Security AI admins role group (or a higher role group, like Organization Management) to configure DLP policies.
Microsoft maintains a set of over 300 sensitive information types for use with DLP and other Purview solutions. Most sensitive information types are pattern-based classifiers. Broadly speaking, many of the standard classifiers use Regex patterns to find matches.
Purview includes methods to generate custom sensitive information types, including through document fingerprinting. For instance, I generated a sensitive information type by processing samples of the U.S. W-8BEN tax form. Sensitive information types created using document fingerprinting cannot be used with the DLP policy for Copilot. I only discovered this when I attempted to use the type when defining the set of sensitive information types to scan for in a policy rule (Figure 1).

Using the DLP Policy for Copilot Prompts
Like the earlier policy, DLP works with Copilot chat in both the app (BizChat) and the chat function in the Office apps. The policy works for both the free and paid-for versions of Copilot Chat.
Figure 2 shows a very simple example. The user knows about a social security number and has used that sensitive information in a prompt to ask Copilot if it can locate the employee that the social security number belongs to. In normal circumstances, Copilot could consult Graph resources like SharePoint files, email messages, or Teams conversations to respond to the prompt. With the DLP policy in place, Copilot politely declines to handle the query.

DLP for Education
The use of DLP policies to prevent people from using sensitive information types in Copilot prompts is a good example of how DLP can educate users about the proper handling of this kind of data. Nothing bad happens from a user perspective: Copilot declines to deal with the query and life goes on. Perhaps a future version of the policy will allow some form of stricter enforcement, such as monitoring how often users try to use blocked sensitive information types in prompts to give frequent offenders more pointed advice. I guess we’ll see!
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365. Only humans contribute to our work!
insAccelerometer Documentation h(x)
I am currently working on building an Extended Kalman Filter with Accelerometer measurements and looked into the implementation of the h(x) measurement equation.
In the documentation and in the source file, when the acceleration is estimated in the state, this equation is h(x) = g_sensor + a_sensor + bias, which is equal to h(x) = bias + R_sensor->navigation*(g_navigation + a_navigation). However, in the actual implementation of the measurement(sensor, filt) function, h(x) appears to be calculated as h(x) = bias – R_sensor->navigation*(a_navigation – g_navigation).
Does anyone know why the actual implementation seems to differ from the documented equation?
Thanks in advance!
insekf, insaccelerometer, sensor fusion MATLAB Answers — New Questions
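For reference, the two expressions from the question can be compared algebraically. This is purely symbolic and does not settle which sign convention the insAccelerometer implementation intends; treat it as a way to frame the question, not as an answer:

```latex
\text{documented:}\quad h(x) = b + R^{s}_{n}\,(g_n + a_n)
\qquad
\text{implemented:}\quad h(x) = b - R^{s}_{n}\,(a_n - g_n) = b + R^{s}_{n}\,(g_n - a_n)
```

The two forms differ only in the sign of the acceleration term $a_n$, so they coincide exactly when the acceleration state (or, equivalently, the gravity vector) is defined with the opposite sign convention.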
Maple toolbox for MATLAB
Is there anybody who is actively using the latest (2025) version of the Maple toolbox (part of Maple) with MATLAB R2025b? I am not able to install the Maple toolbox with MATLAB R2025b on Linux (the installation process hangs).
I need to use the Maple toolbox for complicated symbolic computing that is not feasible with the MATLAB Symbolic Math Toolbox (many symbolic matrix exponentials of 6×6 symbolic matrices, which take enormous CPU time in the Symbolic Math Toolbox).
maple MATLAB Answers — New Questions
Why am I unable to validate my LSF cluster profile in the Parallel Computing Toolbox?
I have MATLAB Parallel Server set up on a cluster running LSF. When I attempt to validate the cluster profile it fails. MATLAB Answers — New Questions
FFT problem: FFT -> some manipulation -> IFFT results in one-element shift
I am trying to take some random noise (produced by a different script), Fourier transform it, multiply it by a filter, and then do an inverse Fourier transform to get some random noise data back. When I was testing my script with a filter which has a value of 1 for all frequencies (i.e. should give back the original data after IFFT), I realized all of the resulting values are shifted by 1 element. I.e., if I plot the original vs multiplied (‘calibrated’) noise it looks like this:
but then if I just circshift one of the arrays, either the noise given or the post-FT noise, by 1 (in opposite directions), it looks like this:
(and the values of the two arrays are equal to within 1e-15 tolerance, which is good enough for my purpose). I suspect the problem is how I’m setting up my frequency arrays/interpolating my actual function I need to multiply by, but I’m pretty much stuck after that–any suggestions would be appreciated.
More in-depth info
Full script is attached (spaghetti code beware, apologies in advance), but it has a lot of plots/troubleshooting, so here is the relevant code:
%%read files
[tempFile, tempPath] = uigetfile('*.csv', 'Select the calibration data file');
calData = readmatrix(fullfile(tempPath, tempFile));
[tempFile2, tempPath2] = uigetfile('*.txt', 'Select the random noise');
randNoise = readmatrix(fullfile(tempPath2, tempFile2));
%% Sort and format calibration data
%sort calibration data by period
clDatSorted=sortrows(calData,[1]);
%extract data
periods = clDatSorted(:,1);
ampsRequired = clDatSorted(:,2);
%mirror the function in the negative frequency domain due to the nature of
%the FT function
ampsRequired = [flip(ampsRequired);ampsRequired];
periods = [flip(periods)*-1;periods];
freqs = 1./periods;
%% Fourier transform the random noise and plot
fourierNoise = fft(randNoise);
fourierNoise = fftshift(fourierNoise); %fftshift used here to center zero-frequency value
Ts = inputdlg('What is the sample period?'); %sample period; user defined because of the way the script to generate my noise is set up
Ts = str2double(Ts{1}); %convert to a number
fs = 1/Ts; %sampling frequency
f = ((-length(fourierNoise)/2):((length(fourierNoise)/2)-1))*fs/length(fourierNoise);
%% interpolate calibration function
%this is going to be important because my calibration function is manually
%obtained so I will need to extrapolate *and* interpolate, unfortunately–I
%can’t just give it a smooth function due to the nature of the data
interpolatedCal = interp1(freqs(2:end-1), ampsRequired(2:end-1), f, 'linear', 'extrap');
%% calibrate
calibratedFourierNoise = interpolatedCal(:) .* fourierNoise; %force a column so shapes match; avoid ', which conjugates a complex spectrum and time-reverses the result
calibratedNoise = ifft(ifftshift(calibratedFourierNoise));
if ~isequal(size(calibratedNoise), size(randNoise))
calibratedNoise = calibratedNoise.'; %transpose (not flip) to fix row/column mismatches
end
And
ismembertol(circshift(randNoise,[0, 1]),calibratedNoise,1e-15)
produces 1’s everywhere.
I think the line where I define my frequency:
f= ((-length(fourierNoise)/2):((length(fourierNoise)/2)-1))*fs/length(fourierNoise);
is the root of the problem but I’ve tried playing around with it and keep encountering the same issue.
In particular, if I do
[p,fp] = pspectrum(randNoise);
the maximum frequency it gives me is 3.14 for this data, and I’m assuming that the max frequency for the Fourier transform is 1/2 the number of random noise values per second–I feel like this is an issue with my lack of deep understanding of Fourier theory tbh, but I can’t pinpoint the exact issue.
Full script/test calibration file with ones everywhere (i.e. flat filter)/sample noise/sample calibration (with ones everywhere as used for the test in sampleCalOneEverywhere and as the filter I’ll actually be using in sampleCal1) are attached.
(Not super relevant but the final filter will be an empirically-obtained low-pass filter with some added stuff.)
fft, ifft, fourier transforms MATLAB Answers — New Questions
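One thing worth checking in the script above is the fourierNoise' term: in MATLAB, ' is the complex-conjugate transpose, and conjugating a spectrum time-reverses the corresponding real signal, which (combined with a later flip) shows up as exactly a one-sample circular shift. A small Python/NumPy sketch of both effects, offered as a plausible explanation rather than a confirmed diagnosis of this particular script:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)

# A flat (all-ones) filter round-trips exactly: fftshift and ifftshift
# are exact inverses, so the transform pair itself introduces no shift.
X = np.fft.fftshift(np.fft.fft(x))
y = np.fft.ifft(np.fft.ifftshift(X)).real          # equals x to machine precision

# Conjugating the spectrum (what MATLAB's ' does silently to complex data)
# time-reverses a real signal: the result is flip(x) circularly shifted
# by one sample, matching the symptom described above.
z = np.fft.ifft(np.conj(np.fft.fft(x))).real       # equals roll(flip(x), 1)
```

In MATLAB, using the nonconjugate transpose .' (or reshaping with (:)) instead of ' avoids the unintended conjugation.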
Microsoft 365 Announcements at Ignite 2025
Day 1 Keynote Includes Many Announcements in a Very Long and Tiring Event
After 150 minutes of non-stop high-tech razzamatazz at the Chase Center in San Francisco (Figure 1), the keynote for the Microsoft Ignite 2025 conference finally ended, roughly an hour longer than it should have lasted. But it seemed that every high-profile Microsoft speaker except Satya Nadella wanted their moment on the stage to talk about Work IQ, agents, foundries, and getting stuff done with AI. Throw in frequent interactions with customers and softball questions to partners, and we ended up breathing a great sigh of relief when matters came to an end.

Two things were obvious from the large screens, scripted words (at one point, I could see four active auto-prompts), and the in-the-round seating (so much better than a cavernous room). First, Judson Althoff (CEO, Microsoft commercial business) is a more polished performer than Nadella. Second, Microsoft focused on explaining how AI solves real-world problems instead of talking about how AI would make work life more fulfilling. In other words, “we’re getting things done with AI” rather than “Copilot can generate some great stuff.” All of this is happening now in the so-called “Frontier firms,” aka companies who are willing to deploy AI now to become “human-led and AI-empowered.”
The change in emphasis was notable and perceivable through demonstrations like the six-minute order for 20,000 t-shirts initiated by Ryan Roslansky and fulfilled by the imaginary Zava company using Microsoft 365 Copilot (attendees could pick up a t-shirt after the keynote). In passing, Zava seems to have taken over from Contoso and Fabrikam in Microsoft demos. Overall, the keynote included just too many demos, most of which attracted desultory levels of applause.
Agent 365
Now available in the Microsoft 365 admin center, Agent 365 is the new admin experience (aka, a “control plane”) for AI agents. Opening the Agents section of the Microsoft 365 admin center reveals details of agent usage within the tenant. When I looked in my tenant, I was surprised to find so many agents listed (Figure 2). I’ve created a couple of agents, but nothing like the 148 reported. The answer lies in the Entra ID app registry, which includes agents published by Microsoft and third parties.

According to a session later on November 18, Purview will make its own contribution to the Agent 365 framework by implementing features like DLP checking for agent prompts (notified as MC1181998, updated 12 November 2025, and due for public preview later this month). The message is that Microsoft is dedicating a lot of effort to building out features that exist today to protect Microsoft 365 data to cover agents as well.
Security Copilot in Microsoft 365 E5
Microsoft also announced the bundling of Security Copilot in Microsoft 365 E5. The new capability is being rolled out now and should reach all E5 tenants over the next few months. Security Copilot measures its processing in Security Compute Units (SCUs), and tenants will receive “400 Security Compute Units (SCUs) per month for every 1,000 user licenses, up to 10,000 SCUs per month.” Further SCUs can be purchased at $6/each.
The SCUs don’t have to be used to analyze security incidents with Security Copilot. They could also be used with the Entra ID agents to process access reviews or conditional access policy optimization.
Microsoft Copilot Enhancements
In the Microsoft 365 space, attention was drawn to announcements like the additional functionality for Microsoft Copilot users (Microsoft 365 users without a full Microsoft 365 Copilot license) in an update that Microsoft plans to roll out in January 2026 and complete worldwide by late March 2026.
The change is documented in MC1187671 (18 November 2025): Chat in Outlook will expand its ability to reason from a single email thread to a complete mailbox, and Copilot Chat gains the ability to create Word, Excel, and PowerPoint files from web data and any files users load into a chat. In addition, Word, Excel, and PowerPoint gain an agent mode to expand the ability of the apps to reason over web data (and the current file) to create content. The agent feature is only available if the tenant allows people to connect and use the Anthropic Claude model. Hopefully, as more Microsoft 365 components consume the Claude LLM, Microsoft will address the shortcomings in how audit events capture details of how people use Claude.
Security a Big Downside for the Ignite 2025 Experience
I haven’t attended an Ignite conference in person since 2019, and I’m not sure that San Francisco works as well as other venues for large conferences. Microsoft imposed heavy security everywhere, probably to avoid the same issues that occurred at Build earlier this year. One protester stood up during the keynote to highlight the use of Azure in Gaza and was quickly removed by venue personnel. Later, a group of protesters was active for some hours outside the Moscone West building (Figure 3).

The security was an unpleasant side of the conference and resulted in long queues. My bag and identity documents were checked four times during the day (keynote, original arrival at Moscone South, going into Moscone West, and going back to Moscone South). While it was great to meet so many people again, the overall experience made me think that I’ll give Ignite a miss for another few years.
Learn how to exploit the data available to Microsoft 365 tenant administrators through the Office 365 for IT Pros eBook. We love figuring out how things work.
From idea to deployment: The complete lifecycle of AI on display at Ignite 2025
By now, most people would agree that AI is in the process of fundamentally changing how we work and solve problems. But this technology is still too often thought of as an addition to the work we do, rather than a fundamental part of it.
AI is not something that you can just plop on the end of a finished product, like a cherry on top of a sundae. Instead, using AI responsibly and wisely means thinking through how it can be used most effectively at every layer, from the datacenter that powers AI functionality to the people and organizations that are benefiting from its capabilities.
As we embark on another Microsoft Ignite, our company is empowering the complete lifecycle of AI, creating tools and solutions to drive the next generation of digital transformation for every organization and at every level of the work they do.
We envision a future where organizations become Frontier Firms by using AI to unlock creativity and innovation, allowing the next great ideas to surface.
These are some of the major themes we are seeing with this year’s Ignite products and features:
AI in the flow of human ambition
At Microsoft, we believe that all great ideas start with human ambition, which can be accessed and unlocked using the capabilities in Microsoft 365 Copilot and an agent ecosystem.
Work IQ amplifies your IQ. It’s the intelligence layer that enables Microsoft 365 Copilot and agents to know how you work, with whom you work and the content you collaborate on. Built on your data, memory and inference, it connects to the rich company knowledge in your emails, files, meetings and chats, plus your preferences, habits, work patterns and relationships. It allows Copilot to make connections, unlock insights and predict the next best action based on native integrations, not a patchwork of third-party connectors. And now, you can tap into the expertise of Work IQ with APIs to build agents tuned to your unique workflows and business needs.
Work IQ also is powering many of the updates across Microsoft 365 Copilot announced at Ignite today.
Ubiquitous innovation and intelligence
In a Frontier Firm, there are makers in every room of the house. People on the frontlines are closest to the work problems that need to be solved. They can create agents to help them in their day-to-day work.
How do AI agents know what to do with your data? Foundry IQ and Fabric IQ help AI agents understand what users are doing, bridge the gap between raw data and real-world business meaning and find the context to make decisions.
Fabric IQ brings together analytical, time series and location-based data with your operational systems under one shared model tied to business meaning. This gives you a live, connected view of your business, so both people and AI can act in real time. If you are a customer who is already using Power BI for your business intelligence reporting, all of that pre-existing data modeling work will act as an immediate accelerant, giving your agents the unique context that defines how your business runs.
Foundry IQ takes this further with a fully managed knowledge system designed to ground AI agents over multiple data sources — including Microsoft 365 (Work IQ), Fabric IQ, custom applications and the web. This single endpoint for knowledge has routing and intelligence built in, enabling higher-quality reasoning, safer actions and more value for builders.
Microsoft Agent Factory is a program that brings these agent IQ layers together to help organizations build agents with confidence. With a single metered plan, customers can start building with IQ using Microsoft Foundry and Copilot Studio. They can deploy their agents anywhere, including Microsoft 365 Copilot, with no upfront licensing and provisioning required. Eligible organizations can also tap into hands-on support from top AI Forward Deployed Engineers and access tailored role-based training to boost AI fluency across teams.
Observability at every layer
By 2028, businesses are projected to have 1.3 billion AI agents automating workflows.[1] Most organizations don’t yet have a way to observe, secure or govern them — if not governed, AI agents are the new shadow IT.
Microsoft Agent 365 enables you to observe, manage and secure your AI agents, whether the agents are created with Microsoft platforms, open-source frameworks or third-party platforms.
It equips them with many of the same apps and protections as people, tailored to agent needs, saving IT time and effort on integrating agents into business processes. It includes the Microsoft security solutions Defender, Entra, Purview and Foundry Control Plane to protect and govern agents, productivity tools including Microsoft 365 apps and Work IQ to help people work more efficiently and Microsoft 365 admin center to manage agents.
This is only a small selection of the many exciting features and updates we will be announcing at Ignite. As a reminder, you can view keynote sessions from Microsoft executives, including Judson Althoff, Scott Guthrie, Charles Lamanna, Asha Sharma and Ryan Roslansky, live or on-demand.
Plus, you can get more on all these announcements by exploring the Book of News, the official compendium of all today’s news.
Frank X. Shaw is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management and military affairs.
Related:
Partners leading the AI transformation: Microsoft Ignite 2025 recap
[1] IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825
The post From idea to deployment: The complete lifecycle of AI on display at Ignite 2025 appeared first on The Official Microsoft Blog.
Microsoft, NVIDIA and Anthropic announce strategic partnerships
Anthropic to scale Claude on Azure
Anthropic to adopt NVIDIA architecture
NVIDIA and Microsoft to invest in Anthropic
Today Microsoft, NVIDIA and Anthropic announced new strategic partnerships. Anthropic is scaling its rapidly-growing Claude AI model on Microsoft Azure, powered by NVIDIA, which will broaden access to Claude and provide Azure enterprise customers with expanded model choice and new capabilities. Anthropic has committed to purchase $30 billion of Azure compute capacity and to contract additional compute capacity up to one gigawatt.
For the first time, NVIDIA and Anthropic are establishing a deep technology partnership to support Anthropic’s future growth. Anthropic and NVIDIA will collaborate on design and engineering, with the goal of optimizing Anthropic models for the best possible performance, efficiency, and TCO, and optimizing future NVIDIA architectures for Anthropic workloads. Anthropic’s compute commitment will initially be up to one gigawatt of compute capacity with NVIDIA Grace Blackwell and Vera Rubin systems.
Microsoft and Anthropic are also expanding their existing partnership to provide broader access to Claude for businesses. Customers of Microsoft Foundry will be able to access Anthropic’s frontier Claude models including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. This partnership will make Claude the only frontier model available on all three of the world’s most prominent cloud services. Azure customers will gain expanded choice in models and access to Claude-specific capabilities.
Microsoft has also committed to continuing access for Claude across Microsoft’s Copilot family, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio.
As part of the partnership, NVIDIA and Microsoft are committing to invest up to $10 billion and up to $5 billion respectively in Anthropic.
Anthropic co-founder and CEO Dario Amodei, Microsoft Chairman and CEO Satya Nadella, and NVIDIA founder and CEO Jensen Huang gathered to discuss the new partnerships:
The post Microsoft, NVIDIA and Anthropic announce strategic partnerships appeared first on The Official Microsoft Blog.
Why is matlab’s fopen so slow?
In our codebase, we want to log strings to a file. I use a very simple function for this:
function log(logstring)
fid = fopen("logging.log","A");
fwrite(fid,logstring);
fclose(fid);
end
Problem is that this is very slow (and I’m already using "A", as recommended for speed).
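A likely culprit is that every call to the logger pays the cost of opening and closing the file. A rough Python timing sketch (the file name and message here are illustrative, not from the original code) shows how much repeated open/close costs compared with keeping one handle open:

```python
# Rough timing sketch (not the MATLAB internals): compare reopening the
# log file for every message against keeping one handle open.
import os
import tempfile
import time

path = os.path.join(tempfile.gettempdir(), "logging_demo.log")
msg = "hello world\n"
n = 2000

# Variant 1: open/append/close per message, like the MATLAB log() above.
t0 = time.perf_counter()
for _ in range(n):
    with open(path, "a") as f:
        f.write(msg)
reopen_time = time.perf_counter() - t0

# Variant 2: one persistent handle for the whole run.
t0 = time.perf_counter()
with open(path, "a") as f:
    for _ in range(n):
        f.write(msg)
persistent_time = time.perf_counter() - t0

os.remove(path)
print(f"reopen: {reopen_time:.4f}s  persistent: {persistent_time:.4f}s")
```

On most systems the per-message open/close dominates; the analogous fix in MATLAB would be to keep the fid open across calls (for example in a persistent variable) instead of calling fopen/fclose on every message.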
I also have Python configured on my PC, which opens up the following alternative way to do the same thing:
function log_python(logstring)
filename = "logging.log";
code = ["with open(filename, 'a', encoding='utf-8', newline='') as f:";
        "    f.write(data)"];
pyrun(code,data=logstring,filename=filename);
end
This method turns out to be about 10x faster than the MATLAB version. How is this possible?

transferred MATLAB Answers — New Questions
Creating an event for “DrawRectangle” such as right click on mouse
Hello, I am drawing two rectangles on an image:
ax=app.UIAxes; % Axes that image is on
numROIs = 2;
roiPos = zeros(numROIs,4);
for cnt = 1:numROIs
hrect = drawrectangle(ax);
roiPos(cnt,:) = hrect.Position;
cnt
end
I want to be able to perform a calculation (e.g. standard deviation) on each region via a right click or something similar, rather than via a MovingROI or ROIMoved event. (The reason is that I don’t want the calculations firing while an ROI is moving, or when one rectangle is in position but the second still needs to be positioned.)
Any pointers? I see the code below, but I can’t find a right-mouse-click event or another way to do this.
addlistener(roi,'MovingROI',@allevents);
addlistener(roi,'ROIMoved',@allevents);
function allevents(src,evt)
    evname = evt.EventName;
    switch(evname)
        case{'MovingROI'}
            disp(['ROI moving previous position: ' mat2str(evt.PreviousPosition)]);
            disp(['ROI moving current position: ' mat2str(evt.CurrentPosition)]);
        case{'ROIMoved'}
            disp(['ROI moved previous position: ' mat2str(evt.PreviousPosition)]);
            disp(['ROI moved current position: ' mat2str(evt.CurrentPosition)]);
    end
end

drawrectangle, events, image, addlistener MATLAB Answers — New Questions
Why did MATLAB load a function from an unexpected m-file?
I’m running MATLAB 2018a. (Yeah, I know. Corporate IT.)
I have an m-file that pulls in functions from several other m-files.
Yesterday, I ran into a problem because the function that was pulled in and ran was not the function I wanted or expected.
Specifically, my main function runs a function called instrRack. This function is defined inside an m-file named instrRack.m.
I expected to get the instrRack function from the file C:\git\unified-cryo\unified\Instruments\@instrRack\instrRack.m.
Instead, the instrRack function came from the file C:\git\unified-cryo\adr\Instruments\@instrRack\instrRack.m.
I discovered this by digging through the error stack returned by the use of the incorrect instrRack function.
Note that the correct file is under "unified-cryo\unified" while the incorrect file is under "unified-cryo\adr".
I don’t believe I’ve seen this problem before and I’ve been using my main function for months, if not years.
As far as I could tell, my MATLAB path was correct. Specifically, C:\git\unified-cryo\unified was at the top of the path, with C:\git\unified-cryo\unified\Instruments a few entries below that.
Unless I missed something, C:\git\unified-cryo\adr was nowhere in the MATLAB path.
Other people use MATLAB on the same test stand. Could one of them have used a path that included the incorrect m-file, and that incorrect m-file or path somehow persisted into my MATLAB session?
I tried killing and restarting my MATLAB session several times, each time verifying that my MATLAB path was correct. However, I kept getting the function from the wrong m-file.
I wound up having to restart the PC in order to get the function from the correct m-file.
Any suggestions on why I kept getting the function from the wrong m-file?

wrong m-file MATLAB Answers — New Questions
Finding peaks and valleys, and associated indexes, of time-shifted noisy sinosidal waves
I want to automate a calibration process, but I’m really struggling with the code.
I have a signal that I give my instrument (second column of attached sample data), and then two probes which record the instrument’s response, independently of each other (last two columns of attached sample data). The calibration signal is a series of sine waves of varying amplitudes and periods, see below:
I need to find the following for my calibrations:
-The signal value at each peak and trough
-Whether each probe produces sinusoidal waves at each period/amplitude combination
-If so, the response value for each probe at its peaks and troughs (both as individual datapoints and as the mean value) and ideally the corresponding periods and amplitudes
The first is quite easy to do and the second can be done manually without too much trouble. Where I’m running into trouble with is the third step.
I tried the findpeaks function and it’s quite noisy:
I did attempt to use the MinPeakHeight argument, use a high-pass filter, and smooth my probe data using moving averages, but none of those attempts worked very well. I also tried to use a Button Down callback function that would store values and indexes, but that ended up being very prone to user error in terms of the exact point to click on, and not very fast. Solutions like this one don’t work because the two sinusoidal responses aren’t quite fully phase-shifted; they’re time-shifted by an amount that isn’t consistent either across probes or over time.
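For what it’s worth, one common recipe for noisy data is: smooth first, keep only local maxima above a threshold, then merge maxima that fall within a fraction of a period of each other. Here is a rough Python sketch on a synthetic noisy sine; the window sizes and thresholds are illustrative, not tuned to the attached data:

```python
# Sketch: find peaks of a noisy sinusoid by smoothing, thresholding,
# then merging local maxima that fall within half a period.
import math
import random

def moving_average(x, w):
    # Simple centered moving average; the window shrinks at the edges.
    n, half = len(x), w // 2
    out = []
    for i in range(n):
        window = x[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def local_maxima(x, min_height):
    # Indices that are strict local maxima at or above min_height.
    return [i for i in range(1, len(x) - 1)
            if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] >= min_height]

random.seed(0)
t = [i / 100 for i in range(1000)]                        # 10 s at 100 Hz
clean = [math.sin(2 * math.pi * 1.0 * ti) for ti in t]    # 1 Hz sine
noisy = [c + random.gauss(0, 0.1) for c in clean]         # add noise

smoothed = moving_average(noisy, 15)
candidates = local_maxima(smoothed, 0.5)

# Merge candidates closer than half a period; keep the tallest in each cluster.
merged = []
for p in candidates:
    if merged and p - merged[-1] < 50:
        if smoothed[p] > smoothed[merged[-1]]:
            merged[-1] = p
    else:
        merged.append(p)
print(len(merged))   # the 10 s record holds 10 full periods -> roughly 10 peaks
```

The MATLAB analogue would be findpeaks with MinPeakDistance and MinPeakProminence (rather than MinPeakHeight alone) applied to a smoothed copy of each probe signal, then reading the raw values back at the detected indices.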
Does anyone have any suggestions for functions to look into which might help me do what I want? I’m quite stuck.

signal processing, time series MATLAB Answers — New Questions
Best approach to develop a GUI tool that integrates Simulink, code generation, testing, and documentation workflows
I’m planning to develop a custom tool—specifically a GUI—that orchestrates the development workflow in MATLAB/Simulink. This includes integration with Simulink models, code generation, document generation, testing, static analysis, and other related tasks. I’m aware that MATLAB provides toolboxes for each of these areas, but my goal is to streamline the process and provide a unified interface to control and coordinate the entire flow.
What would be the best approach to implement such a tool?
Should I build the GUI entirely using MATLAB (e.g., App Designer or programmatic UI components), or would it be better to leverage Java, considering that MATLAB supports Java integration?
Any guidance or examples from similar projects would be greatly appreciated.

gui, development process MATLAB Answers — New Questions









