Month: August 2025
September 2025 Update for Automating Microsoft 365 with PowerShell
Update 15 Available for Download

The Office 365 for IT Pros eBook team is happy to announce the availability of the September update for the Automating Microsoft 365 with PowerShell eBook. The ebook is available as part of the Office 365 for IT Pros eBook bundle or as a separate subscription. Those with current subscriptions for either the bundle or the separate book can download the updated PDF and EPUB files now.
For those who like printed text, a paperback version is available through Amazon.com print on demand. The version number for the update is 15.1.
Microsoft Graph PowerShell SDK V2.30
Microsoft released V2.30 of the Microsoft Graph PowerShell SDK on August 19. I’ve been using the new version, and it hasn’t thrown up any problems, including running in Azure Automation. In truth, there’s nothing very different in V2.30 apart from some fixed bugs and support for recently released Graph APIs.
Following several disastrous releases, stability and reliability are the two most important attributes the Microsoft Graph PowerShell SDK can exhibit. Engineering responsibility for the SDK has moved to a new group, and the hope is that the new team will deliver a series of high-quality releases. Time will tell.
Microsoft also released V1.0.11 of the Entra PowerShell module on August 22. The Entra module is based on the Microsoft Graph PowerShell SDK but only deals with Entra objects and configurations. I must admit to paying little attention to the Entra module because I prefer working with the full SDK, but I can see how those who are accustomed to working with the old AzureAD module will find the Entra module easier to get up to speed with.
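For instance, here's a rough side-by-side sketch of fetching some user accounts with each module (the scope shown is an assumption, and the Entra cmdlet names follow the module's AzureAD-style naming):

# Entra PowerShell module (wraps the Graph SDK)
Connect-Entra -Scopes 'User.Read.All'
Get-EntraUser -Top 5 | Format-Table DisplayName, UserPrincipalName

# Microsoft Graph PowerShell SDK
Connect-MgGraph -Scopes 'User.Read.All'
Get-MgUser -Top 5 | Format-Table DisplayName, UserPrincipalName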
Microsoft Teams V7.31
This month saw Microsoft release V7.30 of the Teams PowerShell module on August 11, 2025, followed ten days later by V7.31. Apparently, a bug was found in education tenants that affected the New-Team cmdlet. Being able to create new teams programmatically is kind of important, so Microsoft rushed out V7.31.
Going back to the Microsoft Graph PowerShell SDK, the New-MgTeam cmdlet is an alternative way to create a new team. Microsoft Graph coverage for Teams includes most administrative operations involving teams, channels, messages, chats, calls, apps, and members. Where Graph coverage exists, there’s a matching Microsoft Graph PowerShell SDK cmdlet. What’s missing in the Graph is coverage for Teams policies, like meeting policies. Many of these policies came from the old Skype for Business Online connector (absorbed into the Teams PowerShell module in 2021).
While understandable that the initial need was to integrate the old Skype for Business Online policies into the Teams PowerShell module, it’s curious that Microsoft hasn’t progressed to deliver full Graph coverage for all aspects of the Teams ecosystem.
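As an illustrative sketch of the SDK route (the display name, description, and owner identifier are placeholder values, and the connection is assumed to hold the Team.Create permission), creating a team with New-MgTeam looks something like this:

$TeamBody = @{
   "template@odata.bind" = "https://graph.microsoft.com/v1.0/teamsTemplates('standard')"
   displayName = "Sample Project Team"
   description = "Team created with the New-MgTeam cmdlet"
   members = @(
      @{
         "@odata.type" = "#microsoft.graph.aadUserConversationMember"
         roles = @("owner")
         # Placeholder: substitute the object identifier of the team owner
         "user@odata.bind" = "https://graph.microsoft.com/v1.0/users('<owner object id>')"
      }
   )
}
New-MgTeam -BodyParameter $TeamBody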
A Bad Decision for Connect-IPPSSession
I cannot understand the logic behind the announcement in MC1131771 (updated 15 August 2025) that the Connect-IPPSSession cmdlet will require the EnableSearchOnlySession parameter to run eDiscovery cmdlets like New-ComplianceSearchAction. Our technical editor, Vasil Michev, published a nice behind-the-scenes analysis of the change on his blog.
The change is due to come into effect on August 31, 2025, and tenants must use V3.9 or later of the Exchange Online management module (released on 12 August 2025). I think the change is required by the changeover to the new eDiscovery framework and the retirement of the older Purview eDiscovery (premium) offering.
The cmdlet documentation says that the switch enables “certain eDiscovery and related cmdlets that connect to other Microsoft 365 services.” Changes like this have a nasty habit of breaking production scripts. I am sure that Microsoft could have done the work to detect when a connection to other services is needed and handle it behind the scenes without imposing the need to change on customers. In other words, make sure that customers see magic and never expose the dirty pipework that makes everything work.
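For what it's worth, once the change takes effect, a connection intended for eDiscovery work will presumably look something like this (the account is a placeholder and the exact syntax might differ from what MC1131771 eventually documents):

Connect-IPPSSession -UserPrincipalName admin@office365itpros.com -EnableSearchOnlySession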
On to TEC 2025
I’m looking forward to speaking about why Azure Automation works great with Microsoft 365 PowerShell at The Experts Conference (TEC) event in Minneapolis (September 30-October 1). TEC is a relatively small event, so great interaction happens between speakers and attendees. Even the heckling is of high quality. The PowerShell script-off competition, where participants are challenged to come up with scripted solutions to real-life questions, is going ahead again and it’s always good fun (the beer consumed by the audience might add to the general air of hilarity).
Fellow Office 365 for IT Pros authors Paul Robichaux and Michel de Rooij will also speak at TEC. If you’d like to attend, here’s a code with a nice discount. Come along and tell us what you like (and don’t) about Office 365 for IT Pros. And if you have a printed copy of Automating Microsoft 365 with PowerShell, we’ll be happy to sign it for you.
Need some assistance to write and manage PowerShell scripts for Microsoft 365? Get a copy of the Automating Microsoft 365 with PowerShell eBook, available standalone or as part of the Office 365 for IT Pros eBook bundle.
making a custom way to train CNNs, and I am noticing that avgpool is SIGNIFICANTLY faster than maxpool in forward and backwards passes…
I’m designing a custom training procedure for a CNN that is different from backpropagation in that I use manual update rules for layers or sets of layers. I’m studying my gradient for two types of layers: “conv + actfun + maxpool”, and “conv + actfun + avgpool”, which are identical layers except the last action is a different pooling type.
I compared the two layer types with identical data dimension sizes to see the time differences between maxpool and avgpool, both in the forward pass and the backwards pass of the pooling layers. All other steps in calculating the gradient were exactly the same between the two layers, and showed the same time costs in the two layers. But when looking at time costs specifically of the pooling operations’ forward and backwards passes, I get significantly different times (average of 5000 runs of the gradient, each measurement is in milliseconds):
gradient step             | AvgPool | MaxPool | Difference
--------------------------|---------|---------|-----------
pooling (forward pass)    | 0.4165  | 38.6316 | +38.2151
unpooling (backward pass) | 9.9468  | 46.1667 | +36.2199
For reference, all my data arrays are dlarrays on the GPU (gpuArrays in dlarrays), all single precision, and the pooling operations convert 32 by 32 feature maps (across 2 channels and 16384 batch size) to 16 by 16 feature maps (of same # channels and batch size), so just a 2 by 2 pooling operation.
You can see here that the maxpool forward pass (using “maxpool” function) is about 92 times slower than the avgpool forward pass (using “avgpool”), and the maxpool backward pass (using “maxunpool”) is about 4.6 times slower than the avgpool backward pass (using a custom “avgunpool” function that Anthropic’s Claude had to create for me, since matlab has no “avgunpool”).
These results are extremely suspect to me. For the forward pass, comparing MATLAB’s built-in "maxpool" to the built-in "avgpool" function gives a 92x difference, but people online seem to claim that max pooling should be faster to train than average pooling, which contradicts the results here.
For simplicity, see the code example below that runs just "maxpool" and "avgpool" only (no other functions) and compares their times:
function analyze_pooling_timing()
% GPU setup
g = gpuDevice();
fprintf('GPU: %s\n', g.Name);
% Parameters matching your test
H_in = 32; W_in = 32; C_in = 3; C_out = 2;
N = 16384; % batch size. Try N = 32 small or N = 16384 big
kH = 3; kW = 3;
pool_params.pool_size = [2, 2];
pool_params.pool_stride = [2, 2];
pool_params.pool_padding = 0;
conv_params.stride = [1, 1];
conv_params.padding = 'same';
conv_params.dilation = [1, 1];
% Initialize data
Wj = dlarray(gpuArray(single(randn(kH, kW, C_in, C_out) * 0.01)), 'SSCU');
Bj = dlarray(gpuArray(single(zeros(C_out, 1))), 'C');
Fjmin1 = dlarray(gpuArray(single(randn(H_in, W_in, C_in, N))), 'SSCB');
% Number of iterations for averaging
num_iter = 100;
fprintf('Running %d iterations for each timing measurement...\n\n', num_iter);
%% setup everything in forward pass before the pooling:
% Forward convolution
Sj = dlconv(Fjmin1, Wj, Bj, ...
    'Stride', conv_params.stride, ...
    'Padding', conv_params.padding, ...
    'DilationFactor', conv_params.dilation);
% activation function (and derivative)
Oj = max(Sj, 0); Fprimej = sign(Oj);
%% Time AVERAGE POOLING
fprintf('=== AVERAGE POOLING (conv_af_ap) ===\n');
times_ap = struct();
for iter = 1:num_iter
    % Average pooling
    tic;
    Oj_pooled = avgpool(Oj, pool_params.pool_size, ...
        'Stride', pool_params.pool_stride, ...
        'Padding', pool_params.pool_padding);
    wait(g);
    times_ap.pooling(iter) = toc;
end
%% Time MAX POOLING
fprintf('\n=== MAX POOLING (conv_af_mp) ===\n');
times_mp = struct();
for iter = 1:num_iter
    % Max pooling with indices
    tic;
    [Oj_pooled, max_indices] = maxpool(Oj, pool_params.pool_size, ...
        'Stride', pool_params.pool_stride, ...
        'Padding', pool_params.pool_padding);
    wait(g);
    times_mp.pooling(iter) = toc;
end
%% Compute statistics and display results
fprintf('\n=== TIMING RESULTS (milliseconds) ===\n');
fprintf('%-25s %12s %12s %12s\n', 'Step', 'AvgPool', 'MaxPool', 'Difference');
fprintf('%s\n', repmat('-', 1, 65));
steps_common = {'pooling'};
total_ap = 0;
total_mp = 0;
for i = 1:length(steps_common)
    step = steps_common{i};
    if isfield(times_ap, step) && isfield(times_mp, step)
        mean_ap = mean(times_ap.(step)) * 1000; % times 1000 to convert seconds to milliseconds
        mean_mp = mean(times_mp.(step)) * 1000; % times 1000 to convert seconds to milliseconds
        total_ap = total_ap + mean_ap;
        total_mp = total_mp + mean_mp;
        diff = mean_mp - mean_ap;
        fprintf('%-25s %12.4f %12.4f %+12.4f\n', step, mean_ap, mean_mp, diff);
    end
end
fprintf('%s\n', repmat('-', 1, 65));
%fprintf('%-25s %12.4f %12.4f %+12.4f\n', 'TOTAL', total_ap, total_mp, total_mp - total_ap);
fprintf('%-25s %12s %12s %12.2fx\n', 'Speedup', '', '', total_mp/total_ap);
end
Now you can see one main difference between my calling of maxpool and avgpool: for avgpool I only have 1 output (the pooled values), but with maxpool I have 2 outputs (the pooled values, and the index locations of these max values).
This is important because if I replace the call to maxpool with one that only requests the pooled values (1 output), maxpool is faster as expected:
>> analyze_pooling_timing
GPU: NVIDIA GeForce RTX 5080
Running 1000 iterations for each timing measurement…
=== AVERAGE POOLING (conv_af_ap) ===
=== MAX POOLING (conv_af_mp) ===
=== TIMING RESULTS (milliseconds) ===
Step                          AvgPool      MaxPool   Difference
-----------------------------------------------------------------
pooling                        0.4358       0.2330      -0.2028
-----------------------------------------------------------------
max/avg                                                    0.53x
>>
However if I call maxpool like here with 2 outputs (pooled values and indices), or with 3 outputs (pooled values, indices, and inputSize), this is where maxpool is extremely slow:
>> analyze_pooling_timing
GPU: NVIDIA GeForce RTX 5080
Running 1000 iterations for each timing measurement…
=== AVERAGE POOLING (conv_af_ap) ===
=== MAX POOLING (conv_af_mp) ===
=== TIMING RESULTS (milliseconds) ===
Step                          AvgPool      MaxPool   Difference
-----------------------------------------------------------------
pooling                        0.4153      38.9818     +38.5665
-----------------------------------------------------------------
max/avg                                                   93.86x
>>
So it appears that this is the reason why maxpool is so much slower: requesting the maxpool function to also output the indices leads to a massive slowdown.
Unfortunately, the indices are needed to later differentiate (backwards pass) the maxpool layer… so I need the indices…
I’d assume that whenever someone wants to train a CNN in matlab using a maxpool layer, they would have to call maxpool with indices, and thus I’d expect a similar slowdown…
Can anyone here comment on this? My application needs me to train maxpool layers, so I need to both forward pass and backwards pass through them, thus I need these indices. But it appears that MATLAB’s version of maxpool may not be the best implementation of a maxpool operation… maybe some inefficient processes regarding obtaining the indices?
deep learning, maxpool, avgpool, indexing MATLAB Answers — New Questions
How to adjust MATLAB Contour Plot’s colorbar to display a selected range of the colormap without changing the actual data limits (‘CLim’)?
I am trying to adjust the Contour Plot’s Colorbar in MATLAB to display a selected range of the colormap without altering the actual data limits (‘CLim’). How can I achieve this?
colorbar, contour MATLAB Answers — New Questions
How do I change column headers in my table
I have some data that I am displaying in my script and have outputted it in 28 different tables. My script is quite long so I am just posting a screenshot for now.
Where it says Var2, I want to change it to the storey number. So for table 1 I want storey 1, for table 2, storey 2, etc.
If you can see, for the main output I am running a loop and using the matrix called "Int_force". I have also made the row headers [Axial Force, Shear Force, Bending Moment]. I am hoping someone can help me change the column headers as described; I am not sure how I would change the loop to accomplish this.
Many thanks,
Scott
tables, for loop, matrices MATLAB Answers — New Questions
How to resample LTE waveforms generated using the LTE Waveform Generator in Simulink R2024b?
I have used the "resample" function in MATLAB to upsample LTE waveforms generated using the LTE Waveform Generator. I would like to do the same in Simulink. What blocks can I use to resample LTE waveforms in Simulink? I attempted to use the "Upsample" block in Simulink, but I do not get results similar to what I get in MATLAB.
ltewaveform, resampling, upsampling, firinterpolation, firrateconversion MATLAB Answers — New Questions
Is there a way to edit the default size of an edit control in a livescript?
Is there any way to change the width of the edit controls in a livescript, either globally or on a per-control basis?
The edit box is a bit small when using it to enter MATLAB code etc. and I would like to have a bit of control over its width.
The screenshot below shows the limited width of the edit box,
It would be nice if it was longer.
livescript, mlx MATLAB Answers — New Questions
Summarize Email Thread Feature Coming to Outlook
Releasing Features like Summarize Email Thread without Microsoft 365 Copilot Licenses is Just Business
Those who are surprised by Microsoft making Copilot features in Office available to users without a Microsoft 365 Copilot license don’t understand that it’s simply a matter of business. If Microsoft doesn’t make basic AI features available within Office, ISVs will fill the vacuum by selling add-ons to integrate ChatGPT or other AI with Outlook. If customers buy ChatGPT integrations, it removes opportunity for Microsoft to sell Microsoft 365 Copilot licenses.
Message center notification MC1124564 (updated 12 August 2025, Microsoft 365 Roadmap item 498320) is a good example. This post announces that the option to summarize email threads will be available in Outlook even for users without a Microsoft 365 Copilot license, provided Copilot chat is pinned to the navigation bar. The feature is available in Outlook Classic (the subscription version), the new Outlook for Windows, and OWA. This option is controlled by a setting in the Copilot section of the Microsoft 365 admin center (Figure 1).

Targeted release users should see the feature between late August 2025 and mid-September 2025, with general availability following between mid-September and mid-November 2025.
Summarizing Email Threads
Generative AI always creates the best results when it has a well-defined set of data to process. Just like users who have Microsoft 365 Copilot licenses, Outlook users without a Copilot license will see a Summarize button in the reading pane. Choosing the option calls Copilot to process the email thread to create a summary by extracting the most important points from the thread. Even in a single-item thread, summarization can be valuable by confirming critical issues raised in a message.
Summarizing an email thread doesn’t include other Copilot features like summarizing attachments for a message.
The Business Question
If Microsoft didn’t offer thread summarization in Outlook, customers could find the same functionality in ISV offerings such as AI MailMaestro (ChatGPT for Outlook), available in the Microsoft app store, which includes the ability to summarize “any email for immediate thread analysis and key points” at a price point where “Copilot is 2.5x more expensive than MailMaestro.”
This is not the only example of an Outlook add-in for ChatGPT (here’s another picked at random). OpenAI has their own connector for Outlook email (and others for Outlook calendar, Teams, and SharePoint Online). Using add-ins and connectors creates security, app management, and compliance questions for Microsoft 365 tenants, but some organizations are happy with the trade-off to gain AI features at reduced cost.
No doubt Microsoft will emphasize to customers that their version of the OpenAI software is specially tailored to the demands of Microsoft 365 in a way that a general-purpose LLM cannot be. However, price is a powerful influence and ChatGPT is a very popular solution.
From a Microsoft perspective, if customers embrace OpenAI-based third-party solutions and deploy add-ins or connectors to extend the Office apps, Microsoft loses some degree of account control and their potential to sell Microsoft 365 Copilot licenses is reduced. Neither outcome is an attractive prospect, especially in large enterprise accounts.
In the context of wanting to protect the Office franchise, it’s understandable why Microsoft should make a limited subset of AI-driven features available to users of the Microsoft 365 enterprise apps (subscription version of Office). Apart from making third-party offerings less attractive, getting Copilot’s proverbial foot in the door is likely to encourage investigation of other Copilot functionality like email prioritization that might lead to future purchases.
Raising the Ante
I’ve nothing against Microsoft adding features to Outlook where it makes sense. Summarizing email threads is an example of where everyone can gain from AI, so it seems sensible to add it to Outlook. The fact that adding the feature helps Microsoft to compete with ISVs might seem regrettable, but it’s just business.
In some scenarios, adding features like this might be deemed anti-competitive, but there is plenty of room for ISVs to compete with Microsoft to exploit AI, and including basic features like summarization rapidly becomes the ante to participate in the market.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
Looking for a reactive zoom implementation
I’m trying to pick up the zoom event from an active figure. I have a figure with the world’s coastlines, and a number of events and their geographical coordinates. I want to make a 2D density map of the events over the coastline. The map should be smoothed with a Gaussian filter with a width that depends on the level of zoom, so that when I zoom in I get a more detailed view. Has anyone implemented something similar already? Even a minimal working example would be great.
zoom, callbacks, histogram, density MATLAB Answers — New Questions
Issue on running polyspace bugfinder qualification kit
I was trying to run the Polyspace qualification kit but I have an issue with a ReportGeneratorQual.pm module.
The module can’t be found (I have also installed Perl) and I can’t find anything online about this module.
How can I install/find this module?
polyspace, do, qualification, qual MATLAB Answers — New Questions
Question with respect to NPUSCH BLER Example
Hello All,
I am trying to simulate the NPUSCH BLER example and would like to understand the hidden assumptions when we disable perfect channel estimation for this example? The simulation results in BLER = 0 for SNR = -25dB when SCS=3.75kHz for 1RU and 1 subcarrier. I would like to use a custom channel (say Rician) for example. I have made the changes accordingly, but it does not seem to work. Please could you help me figure out what is happening in the background?
Update: I realised that noiseEst is always 0 when ‘perfectChannelEstimate=false’ and this is giving me wrong results for lower SNR.
Thanks in advance
nbiot, npusch, bler MATLAB Answers — New Questions
How do I find the mode of an array and verify it using the frequency?
A very basic question, but I am just getting back into it and I’m struggling to find the right answer on the forums (or am I very bad at using Google?).
I have an array which is 100×50 and want to find the mode of each row. When I run the below function, it provides a 98% correct solution. Two of the rows have a mix of values and it is not as easy to tell the mode, whereas the other 98 rows have a distinct value. Is it possible to set a threshold so that out of the row, if the value is only less than 30% of the total value then ignore it?
So if row 99 has a value of 1 as the mode, but 1 only occurs 20% (10/50) of the time in the total row then ignore it.
mode(a(:,:));
mode, frequency, array MATLAB Answers — New Questions
Cross-Coupled System Identification
Hi everyone, I aim to build a model of my system using real-time data. My system is a MIMO (Multiple Input Multiple Output) system, where the inputs are RPM and rudder angle, and the outputs are linear velocity and angular velocity. The system is also cross-coupled, so both inputs affect both outputs, and I think the model is nonlinear. I want to implement a system identification model. I collected data for training and validation. Then I applied a filter and prepared the data for system identification. I tried different models such as TF, State Space, and Nonlinear models. I see that the best models are estimated by the Nonlinear ARX model. But I don’t know how to find the best model fit because there are many options in the Nonlinear ARX model window. For example, there are many options in the Nonlinear Function bar, such as Wavelet, Sigmoid, Neural, Gaussian, etc. So how can I find the best model fit? Do I need to write code for a grid search so that it finds the best possible fit by trying different configurations? Do you have any recommendations?
model, system MATLAB Answers — New Questions
Running MATLAB R2024a headless on Ubuntu 24.04 causes GUI errors
Hello,
I’ve installed MATLAB R2024a on a Ubuntu Server 24.04.3 LTS machine. My goal is to run it headlessly (without a GUI), so I try launching it with:
matlab -nodesktop -nojvm -nosplash
However, when doing this, MATLAB attempts to use a graphical interface and triggers errors related to missing graphical libraries (libXrandr, libgdk_pixbuf, libXinerama, etc.).
The error messages appear to come from a temporary ServiceHost installer located at:
~/.MathWorks/ServiceHost/<machine_name>/_tmp_MSHI_xxxxx/mci/_tempinstaller_glnxa64/bin/glnxa64/InstallMathWorksServiceHost
It seems that the ServiceHost installer is trying to use GUI components, which doesn’t work in this headless server environment.
My questions are:
Is there a way to disable or bypass the ServiceHost installation when running MATLAB in headless mode?
Alternatively, is there a way to install ServiceHost in a non-GUI/headless manner so MATLAB won’t attempt to launch the graphical installer?
installation, servicehost, linux, ubuntu, headless MATLAB Answers — New Questions
Microsoft 365 Tenants Need Vanity Domains to Send External Email
Severe Limitations to be Applied to Outbound Email from MOERA Domains
On August 20, 2025, Microsoft announced their latest step to limit misuse of Exchange Online by spammers by limiting the ability of mailboxes with primary SMTP addresses based on a MOERA (Microsoft Online Exchange Routing Address) to send email to external recipients. For years, spammers have created Microsoft 365 tenants and promptly used the tenant to send email. New tenants come with a default sub-domain in the onmicrosoft.com domain, like spamforus.onmicrosoft.com (a MOERA domain).
New mailboxes created in the domain receive primary SMTP addresses based on the MOERA domain, like John.Smith@spamforus.onmicrosoft.com. All of this is deliberate and intended to allow new tenants to be able to send email. However, spammers take advantage of Microsoft’s email routing infrastructure to share their content with anyone they can reach. The consequence is that onmicrosoft.com domains have poor reputations, so much so that many tenants block all email from onmicrosoft.com domains to reduce the amount of spam that reaches user mailboxes.
Microsoft has acted elsewhere in Microsoft 365 to limit the communication horizon for trial tenants by blocking federated Teams chat. Receiving an unwanted chat from some unknown external sender is the rough equivalent of receiving email spam. It’s a distraction and interrupts real work, so actions to limit unwanted communications are always welcome.
In this case, Microsoft will introduce a new throttling restriction to limit tenants that use MOERA domains to 100 external recipients per 24-hour rolling window. That’s very restrictive because the limit applies to email sent across an entire organization. Once the threshold is reached, the Exchange transport system refuses to accept more outgoing email and responds to the sender with a non-delivery notification with a 550 5.7.236 code.
Throttling Starts in October 2025
As is normal when Microsoft introduces a new email send threshold for Exchange Online, the new limit will roll out from October 15, 2025, starting with trial tenants and slowly progressing through small and medium tenants until the final step on June 1, 2026, when the limit applies to tenants with more than 10,000 accounts with paid Exchange licenses (“paid seats”).
The solution to avoid throttling is to acquire a regular domain and add it as an accepted domain for Exchange Online in the Microsoft 365 admin center (Figure 1). Sometimes these domains are referred to as “vanity” domains because they become part of an organization’s branding strategy, much like we use the office365itpros.com domain for email and this site.

I can’t imagine running a Microsoft 365 tenant with more than 10,000 accounts that doesn’t use a regular domain. It’s not as if acquiring a domain is expensive. Many cost less than $50/year from a domain registrar like Godaddy.com or WordPress.com.
Finding MOERA Senders
Microsoft recommends using the message trace facility to find email sent using a tenant’s MOERA domain. That’s certainly one way to approach the problem, but it won’t reveal all the problem mailboxes. A better idea is to use the Get-ExoMailbox cmdlet to search for mailboxes whose primary SMTP address uses the MOERA domain. This code shows how to look for user, shared, and group mailboxes that need to have their primary SMTP address updated to a regular domain. The code excludes mailboxes created for accounts in other tenants in a multi-tenant organization.
Get-EXOMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox, SharedMailbox, GroupMailbox | Where-Object {$_.PrimarySmtpAddress -like "*.onmicrosoft.com*" -and $_.PrimarySMTPAddress -notLike "*#EXT*"} | Sort-Object DisplayName | Format-Table DisplayName, PrimarySmtpAddress, RecipientTypeDetails -AutoSize

DisplayName     PrimarySmtpAddress                          RecipientTypeDetails
-----------     ------------------                          --------------------
"Popeye" Doyle  Popeye.Doyle@o365maestros.onmicrosoft.com   UserMailbox
Adele Vance     AdeleV@o365maestros.onmicrosoft.com         UserMailbox
Alain Charnier  Alain.Charnier@o365maestros.onmicrosoft.com UserMailbox
Break Glass     Break.Glass@o365maestros.onmicrosoft.com    UserMailbox
Buddy Russo     Buddy.Russo@o365maestros.onmicrosoft.com    UserMailbox
Later, the code can be used to find the affected mailboxes before updating their primary SMTP address with a new address belonging to the regular domain. For example, if the new domain is Beachdums.com, the command to update a mailbox is something like this:
Set-Mailbox -Identity Buddy.Russo@o365maestros.onmicrosoft.com -WindowsEmailAddress Buddy.Russo@beachdums.com
To ensure that messages addressed to the previous address can be delivered, Exchange Online keeps the address in the EmailAddresses property of the mailbox.
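Building on the Get-ExoMailbox query shown above, a rough sketch of a bulk switch might look like this (beachdums.com is the example domain from above, the code assumes the local part of each address stays the same, and group mailboxes would need Set-UnifiedGroup rather than Set-Mailbox):

$NewDomain = "beachdums.com"
[array]$Mbxs = Get-EXOMailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox, SharedMailbox |
   Where-Object {$_.PrimarySmtpAddress -like "*.onmicrosoft.com*" -and $_.PrimarySMTPAddress -notLike "*#EXT*"}
ForEach ($Mbx in $Mbxs) {
   # Keep the local part of the old address and swap in the vanity domain
   $NewAddress = ("{0}@{1}" -f $Mbx.PrimarySmtpAddress.Split("@")[0], $NewDomain)
   Set-Mailbox -Identity $Mbx.ExternalDirectoryObjectId -WindowsEmailAddress $NewAddress
}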
The Microsoft article contains other points that need to be attended to after switching domains. None of these are difficult tasks, but detail is important.
Good Change
The battle against spam is longstanding and permanent. Microsoft is closing as many holes as possible to make Exchange Online a poor host for spammers to target. Closing off the MOERA hole is a good step forward that shouldn’t cause too much pain for legitimate tenants. That is, if you don’t use MOERA domain addresses for outbound email.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
Crash with C Caller in Accelerator mode
I have a project which uses C Callers, and with release 2025a it crashes in Accelerator mode.
Same model with 2024b or later it runs fine in accelerator and normal.
In rapid accelerator same issues with older and last release.
I think it is a well known problem. Any advice?
Thanks to all.
D
ccaller MATLAB Answers — New Questions
define 300 function calls x1(t)—x300(t)
Hi there,
I am using ode15s to solve a DAE system. Since there are 300 variables x1(t)~x300(t) in my case, I feel it is a little bit inconvenient to define them by writing
syms x1(t) … x300(t)
one by one. Are there any efficient codes to define them?
Many thanks!
function calls, ode, differential equations MATLAB Answers — New Questions
How to further utilize the “simplify” function for simplification?
I was simplifying the trigonometric function, but despite using the "simplify" function, the result I obtained is as follows.
However, this result can still be further simplified. Could you please tell me where the problem lies?
symbolic, simplify MATLAB Answers — New Questions
Create code for a 4th-order Butterworth bandpass filter without the Signal Processing Toolbox
I am working with surface EMG data and need a 4th-order Butterworth bandpass filter, but I don’t have the Signal Processing Toolbox. The cutoff frequency range is 20 to 450 Hz. I am also trying to understand how to come up with the filter coefficients and where to apply them if I’m using the filtered signal to measure the MVIC of the data.
Thank you.
filter, signal processing MATLAB Answers — New Questions
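A minimal sketch of one way to design such a filter in base MATLAB, using the bilinear transform of an analog Butterworth prototype. The sampling rate fs below is an assumed value (it is not given in the question), and the design is a 4th-order bandpass built from a 2nd-order lowpass prototype; treat it as a starting point rather than a verified replacement for butter.
% Butterworth bandpass design without the Signal Processing Toolbox (sketch only)
fs = 2000;               % sampling rate in Hz (assumption, not from the question)
f1 = 20;  f2 = 450;      % passband edges in Hz
n  = 2;                  % lowpass prototype order, giving a 2*n = 4th-order bandpass

% Prewarp the band edges for the bilinear transform
w1 = 2*fs*tan(pi*f1/fs);
w2 = 2*fs*tan(pi*f2/fs);
Bw = w2 - w1;            % analog bandwidth
w0 = sqrt(w1*w2);        % analog centre frequency

% Poles of the unit-cutoff analog Butterworth lowpass prototype
k  = 1:n;
pl = exp(1i*pi*(2*k + n - 1)/(2*n));

% Lowpass-to-bandpass transformation: each prototype pole splits in two
pb = [(Bw*pl + sqrt((Bw*pl).^2 - 4*w0^2))/2, ...
      (Bw*pl - sqrt((Bw*pl).^2 - 4*w0^2))/2];
zb = zeros(1, n);        % n analog zeros at s = 0
kb = Bw^n;               % analog gain

% Bilinear transform of zeros, poles, and gain to the z-domain
zd = (2*fs + zb)./(2*fs - zb);
pd = (2*fs + pb)./(2*fs - pb);
kd = real(kb*prod(2*fs - zb)/prod(2*fs - pb));
zd = [zd, -ones(1, numel(pb) - numel(zb))];   % remaining zeros map to z = -1

b = real(kd*poly(zd));   % numerator coefficients
a = real(poly(pd));      % denominator coefficients

% Apply to the EMG signal; filter() is forward-only, so filtering the
% time-reversed output a second time gives an approximate zero-phase result
% y = filter(b, a, emg);
A quick sanity check in base MATLAB is to evaluate abs(polyval(b, exp(1i*2*pi*f/fs))./polyval(a, exp(1i*2*pi*f/fs))) over a vector of frequencies f and confirm the response is near 1 between 20 and 450 Hz and rolls off outside that band.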
MATLAB unable to parse a numeric field when I use the gather function on a tall array
I have a CSV file with a large number of data points on which I want to run a particular algorithm, so I created a tall array from the file and wanted to import a small chunk of the data at a time. However, when I tried to use gather to bring a small chunk into memory, I got the following error.
"Board_Ai0" is the header of the CSV file. It is not present in row 15355, as can be seen below where I opened the CSV file in MATLAB’s Import Tool.
The same algorithm works perfectly fine when I don’t use a tall array but instead import the whole file into memory. However, I have other, larger CSV files that I also want to analyze and that won’t fit in memory.
UPDATE: Apparently the images were illegible, but someone else edited the question to make them larger, so I guess it should be fine now. Also, I can’t attach the data files to this question because the files that give me this problem are all larger than 5 GB.
"Board_Ai0" is the header of the CSV file. It is not in present in row 15355 as can be seen below where I opened the csv file in MATLAB’s import tool.
The same algorithm works perfectly fine when I don’t use tall array but instead import the whole file into the memory. However, I have other larger CSV files that I also want to analyze but won’t fit in memory.
UPDATE: So apparently the images were illegible but someone else edited the question to make the size of the image larger so I guess it should be fine now. Also I can’t attach the data files to this question because the data files that give me this problems are all larger than 5 GB. So I have a CSV file with a large amount of datapoints that I want to perform a particular algorithm on. So I created a tall array from the file and wanted to import a small chunk of the data at a time. However, when I tried to use gather to get the small chunk into the memory, I get the following error.
"Board_Ai0" is the header of the CSV file. It is not in present in row 15355 as can be seen below where I opened the csv file in MATLAB’s import tool.
The same algorithm works perfectly fine when I don’t use tall array but instead import the whole file into the memory. However, I have other larger CSV files that I also want to analyze but won’t fit in memory.
UPDATE: So apparently the images were illegible but someone else edited the question to make the size of the image larger so I guess it should be fine now. Also I can’t attach the data files to this question because the data files that give me this problems are all larger than 5 GB. tall, gather MATLAB Answers — New Questions
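One hedged workaround, assuming the failing row contains a stray repeat of the header (or some other non-numeric token) rather than a genuine value: build the tall array from a datastore that treats the offending token as a missing value. The file name below is a placeholder.
% Treat the unparseable token as missing instead of erroring (sketch only)
ds = tabularTextDatastore('bigfile.csv');    % placeholder file name
ds.TreatAsMissing = {'Board_Ai0'};           % a repeated header token becomes MissingValue
ds.MissingValue   = NaN;
T = tall(ds);
chunk = gather(head(T, 10000));              % pull a small chunk into memory
If the file genuinely contains a malformed row rather than a repeated header, inspecting the raw text around line 15355 (for example with fgetl in a short loop) is the quickest way to see what the parser is choking on.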
How Do Assignment and Deletion Work with an Empty Index on the Left Hand Side?
I’ve always thought that indexing with an empty array on the LHS of an = is a no-op for the variable on the LHS (there could be side effects of the evaluation of the RHS).
For example
% Case 1
A = [1 2;3 4];
A([]) = 5
But assigning a non-scalar on the RHS results in an error
% Case 2
try
A([]) = [5,6];
catch ME
ME.message
end
Why doesn’t Case 1 result in an error, given that it also has a different number of elements on the left and right sides?
Using the deletion operator on the RHS with an empty index on the LHS reshapes the matrix
% Case 3
A = [1 2;3 4];
A([]) = []
Why does a request to delete nothing from A change A?
But that doesn’t happen if A is a column vector
% Case 4
A = [1;2;3;4];
A([]) = []
Deletion with an empty index that has one nonzero dimension also reshapes the LHS
% Case 5
A = [1 2;3 4];
A(double.empty(0,1)) = []
A = [1 2; 3 4];
A(double.empty(1,0)) = []
I suppose Cases 3 and 5 are related to deleting a specific (non-empty) element from a matrix
% Case 6
A = [1 2;3 4];
A(3) = []
But I’m surprised that the result is not a column vector.
Are any/all of these results following a general rule? Pointers to relevant documentation would be welcome. I couldn’t find any.
empty index, deletion, assignment MATLAB Answers — New Questions
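A short illustration of the pattern the cases above suggest: deleting with a linear index from a matrix flattens the result to a row vector, even when nothing is deleted, while vectors keep their orientation. This is an observation drawn from the cases in the question rather than a statement of the documented rule.
% Apparent rule illustrated (sketch only)
A = [1 2; 3 4];
A([]) = [];          % empty linear index: A becomes a 1-by-4 row vector
size(A)

B = [1; 2; 3; 4];
B([]) = [];          % B is already a vector: its 4-by-1 orientation is preserved
size(B)

C = [1 2; 3 4];
C(3) = [];           % deleting one element by linear index: 1-by-3 row vector
size(C)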