Month: February 2026
How to load data from Octave?
I have a .M file from Octave that I want to run in MATLAB, but I obtain an error. Here is the content of the .M file (it’s just one line):
load "Slovenia_centered.mat"
And here is the error:
Error using load
Unable to read file ‘"Slovenia_centered.mat"’: Invalid argument.
Error in
Slovenia_centered (line 1)
load "Slovenia_centered.mat"
^^^^
MATLAB Answers — New Questions
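A likely fix, hedged for the releases that throw this error: in MATLAB command syntax the double quotes are passed through as literal characters of the filename, which is exactly what the message `Unable to read file '"Slovenia_centered.mat"'` shows. Either drop the quotes or switch to function syntax:

```matlab
% Command syntax: the argument is taken literally, so omit the quotes
load Slovenia_centered.mat

% Function syntax: quotes delimit a string and are not part of the name
load("Slovenia_centered.mat")
```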
What are the Differences between Simulink “Powertrain blockset” and “Simscape Driveline” in the case of developing a Hybrid Electric Vehicle?
Hello all. I am working on developing Energy Management Strategies for Hybrid Electric Vehicles using MATLAB and Simulink. I am now in the modelling phase and am quite stuck and confused over which approach to take. Should I develop the powertrain components in Simulink using "MATLAB FCN" blocks, or should I use Simulink libraries? If the latter, which library should I choose: Powertrain Blockset or Simscape Driveline? Which is more suited to my application? Pros and cons of both? Thanks for any help.
simulink, hev, driveline, powertrainblockset
Asha Sharma named EVP and CEO, Microsoft Gaming
Satya Nadella, Chairman and CEO, and members of his executive team shared the following communications with employees today.
SATYA NADELLA MESSAGE
Gaming has been part of Microsoft from the start. Flight Simulator shipped before Windows, and you can practically ray‑trace a line from DirectX in the ’90s to the accelerated‑compute era we’re in today.
As we celebrate Xbox’s 25th year, the opportunity and innovation agenda in front of us is expansive. Today we reach over 500 million monthly active users, are a top publisher across all platforms, and continue to innovate across gaming hardware, content, and community, in service of creators and players everywhere.
I am long on gaming and its role at the center of our consumer ambition, and as we look ahead, I’m excited to share that Asha Sharma will become Executive Vice President and CEO, Microsoft Gaming, reporting to me. Over the last two years at Microsoft, and previously as Chief Operating Officer at Instacart and a Vice President at Meta, Asha has helped build and scale services that reach billions of people and support thriving consumer and developer ecosystems. She brings deep experience building and growing platforms, aligning business models to long-term value, and operating at global scale, which will be critical in leading our gaming business into its next era of growth.
Matt Booty will become Executive Vice President and Chief Content Officer, reporting to Asha. Matt’s career reflects a lifelong commitment to games and to the people who make them. Under his leadership, Microsoft Gaming has grown to span nearly 40 studios across Xbox, Bethesda, Activision Blizzard, and King, which are home to beloved franchises including Halo, The Elder Scrolls, Call of Duty, World of Warcraft, Diablo, Candy Crush, and Fallout.
Together, Asha and Matt have the right combination of consumer product leadership and gaming depth to push our platform innovation and content pipeline forward. Last year, Phil Spencer made the decision to retire from the company, and since then we’ve been talking about succession planning. I want to thank Phil for his extraordinary leadership and partnership. Over 38 years at Microsoft, including 12 years leading Gaming, Phil helped transform what we do and how we do it. He expanded our reach across PC, mobile, and cloud; nearly tripled the size of the business; helped shape our strategy through the acquisitions of Activision Blizzard, ZeniMax, and Minecraft; and strengthened our culture across our studios and platforms. I’ve long admired Phil’s unwavering commitment to players, creators, and his team, and I am personally grateful for his leadership and counsel. He will continue working closely with Asha to ensure a smooth transition.
We have extraordinary creative talent across our studios and a global platform that is second to none. I’m excited for how we will capture the opportunity ahead and define what comes next, while staying grounded in what players and creators value.
Please join me in congratulating Asha and Matt on their new roles, and in thanking Phil for everything he has done for Microsoft and for our industry.
PHIL SPENCER MESSAGE
When I walked through Microsoft’s doors as an intern in June of 1988, I could never have imagined the products I’d help build, the players and customers we’d serve, or the extraordinary teams I’d be lucky enough to join. It’s been an epic ride and truly the privilege of a lifetime.
Last fall, I shared with Satya that I was thinking about stepping back and starting the next chapter of my life. From that moment, we aligned on approaching this transition with intention, ensuring stability, and strengthening the foundation we’ve built. Xbox has always been more than a business. It’s a vibrant community of players, creators, and teams who care deeply about what we build and how we build it. And it deserves a thoughtful, deliberate plan for the road ahead.
Today marks an exciting new chapter for Microsoft Gaming as Asha Sharma steps into the role of CEO, and I want to be the first to welcome her to this incredible team. Working with her over the past several months has given me tremendous confidence. She brings genuine curiosity, clarity and a deep commitment to understanding players, creators, and the decisions that shape our future. We know this is an important moment for our fans, partners, and team, and we’re committed to getting it right. I’ll remain in an advisory role through the summer to support a smooth handoff.
I’m also grateful for the strength of our studios organization. Matt Booty and our studios teams continue to build an incredible portfolio, and I have full confidence in the leadership and creative momentum across our global studios. I want to congratulate Matt on his promotion to EVP and Chief Content Officer.
As part of this transition, Sarah Bond has decided to leave Microsoft to begin a new chapter. Sarah has been instrumental during a defining period for Xbox, shaping our platform strategy, expanding Game Pass and cloud gaming, supporting new hardware launches, and guiding some of the most significant moments in our history. I’m grateful for her partnership and the impact she’s had, and I wish her the very best in what comes next.
Most of all, to everyone in Microsoft Gaming, I want to say “thank you.” I’ve learned so much from this team and community, grown alongside you, and been continually inspired by the creativity, courage, and care you bring to players, creators, and to one another every day.
I’m incredibly proud of what we’ve built together over the last 25 years, and I have complete confidence in all of you and in the opportunities ahead. I’ll be cheering you on in this next chapter as Xbox’s proudest fan and player.
Phil
XBL: P3
ASHA SHARMA MESSAGE
Dear team,
Today I begin my role as CEO of Microsoft Gaming.
I feel two things at once: humility and urgency.
Humility because this team has built something extraordinary over decades. Urgency because gaming is in a period of rapid change, and we need to move with clarity and conviction.
I am stepping into work shaped by generations of artists, engineers, designers, writers, musicians, operators and more who create worlds that have brought joy and deep personal meaning to hundreds of millions of players. The level of craft here is exceptional, and it is amplified by Xbox, which was founded in the belief that the power of games connects people and pushes the industry forward.
Thank you to Phil for his leadership, and to every studio, platform, and operations team that built this foundation. We are stewards of some of the most loved stories and characters in entertainment and bring players and creators together around the fun and community of gaming in entirely new ways.
My first job is simple: understand what makes this work and protect it.
That starts with three commitments.
First, great games.
Everything begins here. We must have great games beloved by players before we do anything. Unforgettable characters, stories that make us feel, innovative game play, and creative excellence. We will empower our studios, invest in iconic franchises, and back bold new ideas. We will take risks. We will enter new categories and markets where we can add real value, grounded in what players care about most.
I promoted Matt Booty in honor of this commitment. He understands the craft and the challenges of building great games, has led teams that deliver award-winning work, and has earned the trust of game developers across the industry.
Second, the return of Xbox.
We will recommit to our core Xbox fans and players, those who have invested with us for the past 25 years, and to the developers who build the expansive universes and experiences that are embraced by players across the world.
We will celebrate our roots with a renewed commitment to Xbox starting with console which has shaped who we are. It connects us to the players and fans who invest in Xbox, and to the developers who build ambitious experiences for it.
Gaming now lives across devices, not within the limits of any single piece of hardware. As we expand across PC, mobile, and cloud, Xbox should feel seamless, instant, and worthy of the communities we serve. We will break down barriers so developers can build once and reach players everywhere without compromise.
Third, future of play.
We are witnessing the reinvention of play.
To meet the moment, we will invent new business models and new ways to play by leaning into what we already have: iconic teams, characters, and worlds that people love. But we will not treat those worlds as static IP to milk and monetize. We will build a shared platform and tools that empower developers and players to create and share their own stories.
As monetization and AI evolve and influence this future, we will not chase short-term efficiency or flood our ecosystem with soulless AI slop. Games are and always will be art, crafted by humans, and created with the most innovative technology provided by us.
The next 25 years belong to the teams who dare to build something surprising, something no one else is willing to try, and have the patience to see it through. We have done this before, and I am here to help us do it again. I want to return to the renegade spirit that built Xbox in the first place. It will require us to relentlessly question everything, revisit processes, protect what works, and be brave enough to change what does not.
Thank you for welcoming me into this journey.
Asha
MATT BOOTY MESSAGE
I read Phil’s note with much gratitude. He has been a steady champion for game creators and our studio teams, and I’ve learned so much from his leadership over the years. All our games have benefited from his foundational support. I’m also grateful to Satya for his ongoing commitment to gaming and holding a vision of how it can connect back to the larger company.
Looking forward, I’m excited to partner with Asha as our next CEO. Our first conversations centered on her commitment to making great games and the role that plays in our overall success. She asks questions, pushes for clarity, and wants our choices grounded in player and developer needs. That mindset matters as the industry around us is changing quickly: how players engage, how games are made, and how business models and platforms evolve.
We have good reasons to believe in what’s ahead. This organization and its franchises have navigated change for decades, and our strength comes from teams who know how to adapt and keep delivering. That confidence is grounded in a strong pipeline of established franchises, new bets we believe in, and clear player demand for what we are building.
My focus is on supporting the teams and leaders we have in place and creating the conditions for them to do their best work. To be clear, there are no organizational changes underway for our studios.
Thanks for everything you do for players and for each other.
Matt
The post Asha Sharma named EVP and CEO, Microsoft Gaming appeared first on The Official Microsoft Blog.
How can i isolate an object and find its orientation with respect to another object?
I am new to the Image Processing Toolbox in MATLAB and wanted some direction on how to do this. At the center of this image, in the middle of the hole (on the crosshair), I have a piece of material that is almost hexagonal in shape with a reddish color. I have multiple similar pictures in which the material is oriented slightly differently.
Could someone please tell me how I can isolate this object from the rest of the image and then find its orientation? I tried image contrast adjustment and segmentation, but the result wasn’t very clean.
image processing
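One way to sketch this, under the assumptions that the part is the most reddish blob in the frame and that the file name and thresholds below are placeholders to tune:

```matlab
% Isolate the reddish blob and estimate its orientation.
rgb = imread('part.png');                 % hypothetical file name
R = double(rgb(:,:,1));
G = double(rgb(:,:,2));
B = double(rgb(:,:,3));
redness = R - (G + B)/2;                  % emphasize reddish pixels
bw = redness > 30;                        % threshold (tune for your images)
bw = bwareafilt(bw, 1);                   % keep only the largest blob
bw = imfill(bw, 'holes');                 % solid region for regionprops
stats = regionprops(bw, 'Orientation', 'Centroid');
theta = stats.Orientation;                % degrees, CCW from the x-axis
fprintf('Orientation: %.1f deg\n', theta);
```

Since the ellipse-fit `Orientation` is only defined up to 180 degrees, comparing it across your image set gives the relative rotation of the part between pictures.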
Deconvolution using FFT – a classical problem
Hello friends, I am new to signal processing and I am trying to achieve deconvolution using FFT. I have an input step function u(t) applied to an impulse response given by . The output function is . I am trying to convolve g and u to get y, as well as deconvolve y and g to get u. However, I cannot quite get the right answers. I understand that the deconvolution process is ill-posed and that I have to use some kind of regularization, but I am lost. I also apply zero padding to twice the length of the input signals. Any guidance will be appreciated.
After deconvolution in the Fourier domain:
Y = fft(y)
G = fft(g)
X = Y./G
x = ifft(X)
I am getting the output shown below:
Which is not the expected outcome. Can someone shed light on what is happening here? Thank you.
deconvolution, fft, inverse problem
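A common remedy, sketched here under the assumptions that y came from a full-length conv(u,g) and that lambda is a tuning parameter, not a prescribed value: the raw division Y./G blows up wherever G is near zero, which is what makes the problem ill-posed. A Tikhonov/Wiener-style regularized division keeps it bounded:

```matlab
% Regularized FFT deconvolution: recover u from y = conv(u, g).
N = 2^nextpow2(2*numel(y));            % zero-pad, as in the question
Y = fft(y, N);
G = fft(g, N);
lambda = 1e-3 * max(abs(G))^2;         % regularization strength (tune)
X = Y .* conj(G) ./ (abs(G).^2 + lambda);
x = real(ifft(X));
x = x(1:numel(y) - numel(g) + 1);      % trim to the length of u
```

As lambda approaches 0 this reduces to the raw Y./G; larger lambda suppresses the noise amplification at frequencies where G is small, at the cost of some smoothing of the recovered step.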
How to do Multichannel statistical analysis?
I just need an example:
%% Load data
% Lab says use readtable, then convert to array
T = readtable("Section2.csv");
data = table2array(T);
% Assumption: column 1 is time/index, columns 2:11 are the 10 subjects
X = data(:,2:11); % Nx10 matrix (each column = one subject)
%% =========================
% [3.2] Central tendency + dispersion for each subject
% mean, median, range, std (one value per subject/column)
% =========================
mean_data = mean(X, 'omitnan'); % 1x10
median_data = median(X, 'omitnan'); % 1x10
% Use max-min for range (robust if MATLAB "range" gives issues)
range_data = max(X, [], 1) - min(X, [], 1); % 1x10
std_data = std(X, 0, 'omitnan'); % 1x10 (0 = sample std)
%% =========================
% [3.2] Find which subject has max/min for each metric
% Lab asks to do this by code (not by looking)
% =========================
[max_mean, subj_max_mean] = max(mean_data);
[min_mean, subj_min_mean] = min(mean_data);
[max_median, subj_max_median] = max(median_data);
[min_median, subj_min_median] = min(median_data);
[max_range, subj_max_range] = max(range_data);
[min_range, subj_min_range] = min(range_data);
[max_std, subj_max_std] = max(std_data);
[min_std, subj_min_std] = min(std_data);
% Display summary
fprintf('\n=== Subject Extremes (Subjects 1-10) ===\n');
fprintf('Mean   : max = %.2f (Subject %d), min = %.2f (Subject %d)\n', ...
    max_mean, subj_max_mean, min_mean, subj_min_mean);
fprintf('Median : max = %.2f (Subject %d), min = %.2f (Subject %d)\n', ...
    max_median, subj_max_median, min_median, subj_min_median);
fprintf('Range  : max = %.2f (Subject %d), min = %.2f (Subject %d)\n', ...
    max_range, subj_max_range, min_range, subj_min_range);
fprintf('Std Dev: max = %.2f (Subject %d), min = %.2f (Subject %d)\n', ...
    max_std, subj_max_std, min_std, subj_min_std);
%% =========================
% [3.2] Time spent > (mean + 20 bpm), in HOURS
% Lab says each row = 5 seconds
% =========================
% Compute for all 10 subjects
hours_over = zeros(1,10);
for s = 1:10
    hours_over(s) = time_over(X(:,s), mean_data(s));
end
% Optional: display results neatly
fprintf('\n=== Time above (mean + 20 bpm) ===\n');
for s = 1:10
    fprintf('Subject %d: %.3f hours\n', s, hours_over(s));
end
%% Optional: put everything into a table (nice for report/checking)
subject_id = (1:10)';
resultsTbl = table(subject_id, mean_data', median_data', range_data', std_data', hours_over', ...
    'VariableNames', {'Subject','MeanHR','MedianHR','RangeHR','StdHR','HoursAboveMeanPlus20'});
disp(resultsTbl);
% Local functions must appear at the end of a script file
function hours = time_over(colVec, meanVal)
% colVec = one subject's HR time series (Nx1)
% meanVal = mean HR for that subject
count = sum(colVec > (meanVal + 20)); % number of samples above threshold
hours = (count * 5) / 3600; % 5 s/sample -> hours
end
multichannel statistical analysis
Assessment failure in Task 1 of Solar Energy module: P_AC signal shows incorrect at the end of simulation.
I am experiencing a persistent assessment failure in the "Solar Energy" section, Task 1, of the Power Systems Simulation Onramp course.
The Problem: I have connected the signal labeled P_AC to the Signal Assessment block as instructed. The connection in the model is a solid black line, yet the assessment returns "Incorrect" with a hint suggesting an unconnected signal.
Symptoms:
The Assessment graph shows red dots (incorrect data) specifically at the end of the simulation period (approx. seconds 23-25).
I have confirmed that the Enable MPPT block is set to 0.
Steps already taken:
Deleted and re-connected the P_AC signal multiple times to ensure a solid connection.
Ran the simulation until completion (100% ready).
Used the "Reset" button for the task and re-attempted.
Could you please check if this is a known issue with the auto-grader for this specific task?
power systems simulation onramp, mppt, signal asse
Matlab slows down when the window is minimized
Hi everyone,
I’m running some heavy code and found that when I minimize the MATLAB window, code execution slows down significantly. How can I address this issue? I’ve already tried using Task Manager to give MATLAB a higher priority, but it didn’t work. Thank you in advance.
speed, code execution slows down
Up Chirp and Down Chirp Generation in a Single plot
I am trying to generate a chirp signal with both an up-chirp and a down-chirp for one of my projects: an up-chirp with a start frequency of 57 GHz and a bandwidth of 150 MHz, then, after 2 microseconds, a down-chirp with the same bandwidth and frequency.
fmcw, radar, fft
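A sketch of one way to do this, assuming the 57 GHz carrier is not sampled directly (that would need a >114 GHz sample rate) and the signal is modeled as the 150 MHz baseband sweep instead; chirp is from the Signal Processing Toolbox:

```matlab
% Up-chirp followed by a down-chirp, modeled at baseband.
B  = 150e6;                 % sweep bandwidth, 150 MHz
Tc = 2e-6;                  % duration of each chirp, 2 us
fs = 4*B;                   % sample rate (comfortably above 2*B)
t  = 0:1/fs:Tc-1/fs;

up   = chirp(t, 0, Tc, B);  % sweep 0 -> 150 MHz (up-chirp)
down = chirp(t, B, Tc, 0);  % sweep 150 MHz -> 0 (down-chirp)
sig  = [up down];           % down-chirp starts after 2 us
tAll = (0:numel(sig)-1)/fs;

plot(tAll*1e6, sig)
xlabel('Time (\mus)'), ylabel('Amplitude')
% spectrogram(sig, 128, 120, 128, fs, 'yaxis')  % shows both slopes
```

Mixing this baseband sweep onto the 57 GHz carrier is implicit; in simulation the triangular frequency profile is what matters for FMCW processing.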
lqr controller for purely magnetic actuation of 3u cubesat
I’m designing an LQR controller for a time-varying system with purely magnetic actuation of a 3U CubeSat. I am using an algorithm that exploits the Hamiltonian matrix and its symplectic property (via Schur decomposition) to find the solution P of the algebraic Riccati equation. I need help implementing this in MATLAB/Simulink.
control_systems, lqr_design
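One frozen-time step of the Hamiltonian/Schur procedure can be sketched as follows; A, B, Q, R are placeholders for the linearized attitude dynamics and LQR weights at the current time, not values from the question:

```matlab
% Solve the ARE via the stable invariant subspace of the Hamiltonian.
n = size(A,1);
H = [A, -B*(R\B.'); -Q, -A.'];     % Hamiltonian matrix, size 2n x 2n
[U,T] = schur(H, 'real');
[U,~] = ordschur(U, T, 'lhp');     % reorder: stable eigenvalues first
U11 = U(1:n, 1:n);
U21 = U(n+1:2*n, 1:n);
P = U21/U11;                       % stabilizing ARE solution
P = real((P + P.')/2);             % symmetrize against round-off
K = R\(B.'*P);                     % LQR gain, u = -K*x
```

In Simulink this would sit inside a MATLAB Function block (or be precomputed over one orbit), re-evaluated as the magnetic-field-dependent B matrix changes along the orbit; the symplectic structure of H guarantees exactly n stable eigenvalues for the reordering to select.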
Parsing mfunctions with mtree
Given an Nx1 column vector of strings containing lines of mfunction code, it is possible to use the undocumented function mtree() to parse it into its constituent functions. For example, given,
load Inputs
str1,
the function below will find the starting and ending lines of each of the three function blocks.
[firstLine,lastLine] = functionBlocks(str1)
This works for nested functions as well:
str2,
[firstLine,lastLine] = functionBlocks(str2)
However, I would really like to have the function be able to group the output separately into top-level function blocks and nested function blocks. I know it is possible to do this through a post-analysis of the outputs firstLine and lastLine, but I wonder if it is possible to get information about whether a function block is top-level or nested directly from mtree, i.e., by modifying,
fnSet = T.mtfind('Kind','FUNCTION');
Unfortunately, because mtree() is undocumented, it is difficult to fathom its full capabilities. Does anyone know if/how this can be done?
function [firstLine,lastLine] = functionBlocks(str)
    T = mtree(strjoin(str,newline));
    fnSet = T.mtfind('Kind','FUNCTION');
    K = fnSet.count;
    kset = fnSet.indices;
    for k = K:-1:1
        fn = fnSet.select(kset(k));
        firstLine(k) = fn.lineno;
        lastLine(k) = fn.lastone;
    end
end
mtree, undocumented, parsing, functions MATLAB Answers — New Questions
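The post-analysis the question already anticipates can be done from firstLine/lastLine alone: a function block is nested exactly when its line range falls strictly inside another block's range, while top-level functions never contain one another. A sketch of that containment test:

```matlab
function isNested = classifyBlocks(firstLine, lastLine)
% isNested(k) is true when block k lies inside some other block's line range,
% i.e., block k is a nested function rather than a top-level one.
K = numel(firstLine);
isNested = false(1, K);
for k = 1:K
    for j = 1:K
        if j ~= k && firstLine(j) <= firstLine(k) && lastLine(k) <= lastLine(j)
            isNested(k) = true;
            break
        end
    end
end
end
```

This assumes, as in the examples above, that mtree's lastone for a parent function spans its nested functions, which is what makes the containment test work.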
Microsoft Takes Aim at ChatGPT
Comparing Microsoft 365 Copilot and ChatGPT Enterprise
Given the surprisingly small number of paid Microsoft 365 Copilot seats (15 million) revealed by Microsoft in their FY26 Q2 results, it is unsurprising that Microsoft should start to compete more openly with OpenAI, especially for Microsoft 365 tenants. The latest initiative is a comparison between Microsoft 365 Copilot and ChatGPT Enterprise with the tagline that “not all AI is built for work.”

OpenAI has tools to allow customers to connect SharePoint Online and OneDrive for Business to ChatGPT. The temptation therefore exists for customers to conclude that something like OpenAI’s SharePoint connector is all that’s needed to leverage AI within a Microsoft 365 tenant.
Quite rightly, Microsoft disagrees, and they prove their point by describing some important areas where ChatGPT can’t deliver what Copilot can. Let’s examine what Microsoft says.
Teams Meetings
Microsoft says that Copilot “reasons over Teams meetings.” Well, Copilot reasons over the transcript generated by Teams meetings (even if the transcript is not retained after the meeting) to generate outputs like summaries and action items. The processing of Teams transcripts is a good example of how AI can effectively process a bounded set of information to generate value.
Because it’s dependent on the accuracy of the transcript, Copilot doesn’t get everything right in its summaries but overall, it does a good job. It’s worth noting that the Facilitator agent does much the same job of creating summaries and noting important points for Teams chats.
If you don’t want to use Microsoft 365 Copilot with Teams, a set of third-party notetaking apps exist that can connect to the audio stream of Teams meetings to generate their version of transcripts and summaries.
Entra ID
Microsoft says that Copilot “knows your organization.” If the organizational reporting structure is recorded accurately in Entra ID, Copilot can use that structure to understand how individuals are connected within the organization. Copilot maps the organizational information into the semantic index to enhance its search capabilities.
The organizational data available in Entra ID is available through the Microsoft Graph, and it wouldn’t take much for OpenAI to include some code to retrieve the information and use that knowledge to create something like the Org Explorer. This isn’t the same as the semantic index, but the basics are there. The challenge for OpenAI would then be how to maintain an accurate picture of Entra ID structures for its enterprise customers.
Sensitivity Labels and Encrypted Files
Microsoft says that Copilot “enforces sensitivity labels” to keep “sensitive data protected.” It’s true that ChatGPT cannot process files protected by sensitivity labels (with encryption) because ChatGPT has no ability to open those files. Sensitivity labels use Azure Rights Management as the basis for their protection, and ChatGPT has no way to prove that it has the right to open protected files on behalf of a user with rights.
Microsoft 365 Copilot depends on the DLP policy for Copilot to tell it not to process certain emails and files protected by sensitivity labels. Proving that bugs can undermine any software, a recent bug allowed Copilot to process sensitive emails and include their content in its responses.
Microsoft doesn’t mention Restricted Content Delivery (RCD), an incredibly important feature that stops Copilot using content from complete SharePoint Online sites. The OpenAI connector simply uploads SharePoint files to process and doesn’t comply with RCD blocks.
SharePoint Pages
Microsoft notes that ChatGPT can’t process SharePoint pages. Because Copilot can process any information available to it via Microsoft Search, it can process SharePoint pages like news posts.
But Wait, There’s More
I guess Microsoft could have also pointed to its nascent agent ecosystem (aka Agent 365) and all that’s implied by that initiative, the addition of the Anthropic models, agents like Researcher, automatic summaries for Word documents, and so on.
The point here is that Microsoft 365 Copilot leverages much more of the information stored in Microsoft 365 workloads than ChatGPT can get to. Whether that’s worth the $360/year (list) per user is a decision that individual companies must make.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
A milestone achievement in our journey to carbon negative
In 2020, Microsoft announced a moonshot commitment to become carbon negative by 2030 — accelerating work across our company to advance the partnerships and technologies needed to advance sustainability for our businesses, our customers and the world. A key milestone on this journey was our aim to match 100% of our annual global electricity consumption with renewable energy(1) by 2025. Today, we are pleased to share that Microsoft has achieved this milestone(2). This progress helps drive investment into the power systems where we operate, expand clean energy supply and advance broader energy innovation.
Over a decade of investment: 40 gigawatts of new renewable energy contracted
What began in 2013 with a single 110 megawatt (MW) power purchase agreement (PPA) in Texas — a small first step to demonstrate how corporate procurement could scale clean energy(3) — has evolved into one of the largest clean energy portfolios in the world. This first deal not only supported Microsoft’s early cloud services but also set in motion a decade of commercial partnerships and learning-by-doing that served to demonstrate how corporate demand for advanced energy solutions can help to achieve a more affordable and sustainable power system, while supporting reliability for customers.
Since our carbon negative announcement in 2020, we have contracted 40 gigawatts (GW) of new renewable energy supply across 26 countries, working with more than 95 utilities and developers across 400+ contracts and counting. To put that amount in perspective — that’s enough energy to power about 10 million US homes. Of that contracted volume, 19 GW are now online, delivering new clean energy supply to the power grid, while the remainder are slated to come online over the next five years.
Our new renewable energy procurement continues to deliver significant environmental benefits, including the reduction of Microsoft’s reported Scope 2 carbon dioxide emissions by an estimated 25 million tons(4) and the mobilization of billions of dollars’ worth of private investment in regions where we operate.
Catalyzing market investment through bankable, repeatable models
Microsoft is among the early pioneers in developing technical and commercial practices that help advance bankable, repeatable and scalable procurement tools suitable for each market. Our clean energy purchasing navigates a global patchwork of power market designs, requiring creativity in how we balance cost, time to market and project sizing in our portfolio across planning, contracting and management.
Our work has benefited from a broad coalition of partners helping to build this market together. According to Bloomberg New Energy Finance, more than 200 global corporations collectively purchased nearly 200 GW of clean energy around the world since 2008. Working alongside other clean energy buyers — as well as hundreds of utilities, manufacturers, financiers, developers and engineers — we have helped reduce transaction costs, expand developer access to financing and streamline procurement approaches that other buyers can adopt.
This global flywheel of partnership, investment, technology and policy innovation is expected to continue to facilitate billions of dollars’ worth of investment into infrastructure and jobs. And as we’ve seen repeatedly, when Microsoft sends a clear market signal for world-class, first-of-a-kind technologies and infrastructure, the power sector rises to the challenge. Our procurement over the past decade has demonstrated that partnerships, communities and innovation are essential ingredients that help to accelerate first-of-a-kind technologies and infrastructure at scale.
Scaling partnerships to scale infrastructure
Critical to Microsoft’s success in expanding digital infrastructure and supporting our local communities is our ability to build trusted partnerships with the over 95 global energy suppliers that support our clean energy portfolio. We have sourced clean energy through multiple requests for proposal or information, bilateral engagements and clean tariffs to evaluate over 5,000 unique carbon-free energy projects around the world.
Today, Microsoft has six energy company partners with which we have over 1 GW of contracted renewable energy capacity, and more than 20 energy supplier partners where each partner has at least five separate renewable energy projects with Microsoft — evidence of the durable, repeatable relationships necessary to scale clean energy. Combining scale with speed, Microsoft’s landmark 10.5 GW framework agreement with Brookfield sends a long-term, 2030 demand signal to the market that enables developers to raise funding more efficiently, bolster supply chains, hire engineers and construct world-class energy infrastructure.
Putting communities first
Our renewable energy procurement has mobilized billions of dollars in private investment, supported thousands of jobs across the communities where we operate and delivered meaningful co-benefits. Through partnerships with developers and nonprofit organizations, we’ve worked to embed community-driven benefits into our energy portfolio. These benefits include robust infrastructure, economic inclusion and support for community-focused organizations.
Our support for communities shows up in projects like our 500 MW PPA with Sol Systems, or our 250 MW PPA with Volt Energy Utility that provided local training and jobs, as well as grants to community nonprofit organizations and habitat restoration. We’ve also signed over 1.5 GW of distributed solar, bringing clean energy directly into hundreds of communities around the world. Landmark agreements like our 500 MW offtake with Pivot Energy, or our 270 MW offtake with PowerTrust are expected to foster employment, energy cost savings and grid resilience in communities across the United States, Mexico and Brazil. More details on the above examples and our approach to community benefits in clean energy agreements can be found in a dedicated Microsoft whitepaper.
Innovation unlocks new markets and pathways
Microsoft’s clean energy procurement continues to play an important role in catalyzing technical, commercial and regulatory innovation. Our commercial efforts have helped lower barriers to entry into new markets and expand access into multi-technology contracts that accelerate decarbonization.
In Japan, Microsoft signed one of the first corporate PPAs in the country’s restructured power market. Our 25 MW, 20-year agreement with Shizen represents the first single-asset virtual PPA executed in the country, which helped pave the way to over 2 GW of corporate procurement since 2024, according to Bloomberg New Energy Finance. Alongside opening new markets, we have structured several multi-technology offtakes in nascent markets for corporate procurement. In India, Microsoft purchased a combined 437 MW solar/wind hybrid offtake from Renew, where our projects will support energy access and rural electrification. In Microsoft’s home state of Washington, our datacenters in Douglas County are supplied by 100% carbon-free energy, as we leverage a creative blend of new wind power and hydropower storage to deliver around-the-clock clean energy.
Looking forward to 2030 and beyond
In 2025, the International Energy Agency (IEA) described a new “Age of Electricity,” marked by accelerating electricity demand from electric vehicles, air conditioners, data centers and heat pumps. As the world electrifies more of the economy, the demand for affordable, reliable and clean electricity will continue to rise.
Our experience building Microsoft’s clean energy portfolio both reflects and furthers global trends. According to IEA data, since 2000, renewable energy generation has expanded nearly four-fold. In many power markets across the world, clean energy is one of the fast-growing sources of generation, and often the one with the fastest time-to-market. Corporate buyers like Microsoft continue to serve as an important catalyst in driving commercial demand for innovation and infrastructure across the power industry.
As we continue our journey toward becoming carbon negative by 2030, Microsoft will continue to push for an expansive focus on adding all forms of carbon-free electricity solutions, complementing and adding to our portfolio of renewable energy resources. We recognize that the world’s rising electricity needs require a balanced, all-of-the-above decarbonization strategy to meet global economic growth and environmental goals, and our sustainability goals will continue to support this approach moving forward. Such a strategy requires a broader set of carbon-free energy and grid-enabling technologies, including nuclear energy, next-generation grid infrastructure and carbon capture technology. Just as renewable energy was a relatively small part of global energy grids in 2013 when we signed our first PPA, today many advanced energy technologies remain early in their development but offer significant promise to accelerate progress towards an affordable, reliable and sustainable energy future.
Microsoft has already taken early steps to support the advancement of a broader set of carbon-free energy technologies as we partner with Helion and Constellation Energy on a 50 MW fusion project in Washington state and work with Constellation to restart the 835 MW Crane Clean Energy Center in Pennsylvania. Microsoft’s Climate Innovation Fund has allocated $806 million of capital to 67 investees, with 38% directed toward Energy Systems — advancing carbon-free power and fuels, energy storage and energy management solutions.
We welcome continued collaboration with our power sector partners to bring these innovations to market and incorporate new technology tools in the process to accelerate their development.
We will continue to build and leverage new AI-driven tools to design, permit and deploy new power technologies that help expand and more efficiently operate the electricity grid, bringing more clean energy online faster. This work is exemplified by our recently announced collaborations with Idaho National Laboratory and the Midcontinental System Operator, among other examples.
And as we advance innovative energy technologies, we recognize that standards must evolve alongside innovation. That is why we will continue participating in industry forums that strengthen carbon accounting frameworks — so that our clean energy procurement is measured with greater accuracy and delivers real world emissions reductions, with a continued focus on maintaining the high level of integrity that the world has come to expect from Microsoft.
Our carbon negative commitment remains a call to action — for Microsoft, our customers and the broader technology sector — to invest in an affordable, reliable and sustainable power system. As we look toward 2030, that call to action has never been clearer.
Gratitude — and momentum for the work ahead
Today’s milestone represents a shared achievement among the utility professionals, clean energy developers, community leaders, technology innovators and forward-thinking policymakers who continue the deployment of renewable energy. Meeting today’s milestone shows what partnership can deliver in bringing big ideas to life. The future of carbon-free energy is one that we will create – together.
As Microsoft’s Chief Sustainability Officer, Melanie Nakagawa leads the company’s targets to be carbon negative, water positive, and zero waste by 2030. She brings deep experience at the intersection of policy, business, and technology to advance climate and sustainability solutions globally.
As President of Cloud Operations + Innovation at Microsoft, Noelle Walsh leads the organization that powers the global Microsoft Cloud. She oversees the company’s physical cloud infrastructure and operations, with a charter focused on safety, security, availability, sustainability, and competitive infrastructure growth—bringing decades of global operational leadership.
Footnotes
- Renewable energy is defined within Microsoft’s fact sheet https://aka.ms/SustainabilityFactsheet2025, which represents FY24 data.
- To date, Microsoft’s renewable energy target includes two primary categories: renewable energy from contracted projects and grid mix. The first is renewable energy delivered under PPAs or similar long-term contracting mechanisms, generally for new projects where our financial involvement in the project’s development is critical for its success. This category represents more than 90% of the renewable energy applied to achieve our 2025 target. The second category is “grid mix” – renewable energy supported via our standard utility relationships and rates, inclusive of policy programs such as renewable portfolio standards and state and utility decarbonization goals. Our 2025 100% renewable target does not include purchases from short-term, so-called “spot market” renewable energy credits (RECs) sourced from operational clean energy projects. With the above in mind, Microsoft leverages a straightforward formula to determine our 100% renewable energy metric on a global, annual basis. We update and further detail the methodology and assumptions behind this formula in our annual sustainability reports:

- Clean energy — also referred to in this blog as carbon-free energy — is defined within Microsoft’s fact sheet https://aka.ms/SustainabilityFactsheet2025, which represents FY24 data.
- Reductions of reported Scope 2 emissions are calculated for FY20–FY25 as the cumulative difference between location-based and market-based emissions, excluding the use of short-term, so-called “spot market” RECs.
The post A milestone achievement in our journey to carbon negative appeared first on The Official Microsoft Blog.
How can I test my EtherCAT network outside of Simulink Real-Time (SLRT) to verify my EtherCAT configuration is okay?
I am using Simulink Real-Time with a Speedgoat target as the main device for my EtherCAT network. I have used Beckhoff’s TwinCAT 3 to configure the ENI file that I plan to use.
I may also be having an issue with my subdevices not responding to commands, or with my EtherCAT network not reaching the OP state, and I suspect my ENI configuration file may have issues.
How can I test this configuration? ethercat, validation, slrt MATLAB Answers — New Questions
MATLAB crashing a few minutes after launching
I’m using MATLAB R2025b and I got this feedback after the crash: Unable to communicate with required MathWorks services (error 5201).
For help with this issue, contact support:
https://www.mathworks.com/support/contact_us.html
Unable to launch MVM server: License Error: Licensing shutdown stop working or crash MATLAB Answers — New Questions
Damping constant in General flexible Beam
Hi.
I’m using the General Flexible Beam block provided in Simscape Multibody, in which you can define the "Damping constant (beta)" in s.
My problem is that I don’t know a damping coefficient with seconds as the unit. I want to implement results from real-life experiments, from which I calculated the logarithmic decrement in s^-1.
So my main question is: what kind of damping coefficient is used in the Simscape model, as I want to implement my test results?
Thanks and best regards simscape multibody, damping MATLAB Answers — New Questions
How to show Robotics System Toolbox (RigidBodyTree) visualization with App Designer?
How to show MATLAB Robotics System Toolbox (RigidBodyTree) visualization with App Designer?
UIAxes does not support robotics.RigidBodyTree.show right now; are there any other options?
This is the mentioned method:
https://www.mathworks.com/help/robotics/ref/rigidbodytree.show.html app, rigidbodytree MATLAB Answers — New Questions
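One workaround, sketched below under the assumption that your release's show method accepts a 'Parent' axes argument, is to create a standard axes inside the app's uifigure and direct the plot there instead of into a UIAxes:

```matlab
% In App Designer code, plot into an axes created on the app's figure;
% check that show supports 'Parent' in your release
robot = loadrobot("kinovaGen3");                 % example robot model
ax = axes(app.UIFigure, 'Position', [0.05 0.05 0.9 0.9]);
show(robot, 'Parent', ax, 'PreservePlot', false);
```

If axes-in-uifigure rendering is not supported in your release, plotting into a separate traditional figure window opened from the app is a fallback.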
Using Dev Proxy with the Microsoft Graph PowerShell SDK
Use Dev Proxy to Detect Common Problems in SDK Scripts
Dev Proxy is a Microsoft API simulator to help developers test cloud applications. That doesn’t sound very interesting to Microsoft 365 tenant administrators, but after reading a series of LinkedIn posts by Waldek Mastykarz (who has the splendid title of “AI Coding Agents Advocate at Microsoft”), I decided to have a look at what Dev Proxy does.
In his posts (here’s an example), Waldek explains how Dev Proxy helps with issues like excessive permissions, poor use of select to minimize data fetched by Graph requests, and improper pagination. Although the Microsoft Graph PowerShell SDK takes care of a lot of Graph housekeeping, permissions and performance are problems faced by people who write PowerShell scripts based on SDK cmdlets.
Getting Dev Proxy Installed on a PC
The Dev Proxy documentation explains how to install the proxy using WinGet. I recommend that you also install the Dev Proxy Toolkit extension for Visual Studio Code because it makes it easier to edit the JSON configuration files used by the proxy. Of course, you can edit the JSON files with Notepad, but Visual Studio Code is the smarter option.
Plugins, Configuration Files, and the Graph
Dev Proxy uses a plugin architecture. The plugins are defined in configuration files, and each plugin instructs the proxy about some form of behavior to monitor. From the perspective of the Microsoft Graph PowerShell SDK, we’re interested in plugins like GraphMinimalPermissionsGuidance, to observe the permissions available to an application and report whether the application is overly-permissioned, and GraphSelectGuidance, which checks the properties for each item fetched by Graph requests to highlight when performance can be improved by retrieving a smaller set of properties.
Dev Proxy comes with a set of standard configuration files. Microsoft recommends that you create your own configuration file instead of editing the standard files. If Dev Proxy finds a file called devproxyrc.json in the directory where you’re running the proxy from, it will use that, but you can create and use whatever configuration file you like. Microsoft recommends that you create files used with Dev Proxy in a separate folder.
For our purposes, the m365.json file at %localAppData%\Programs\Dev Proxy\Config is a good starting point for a configuration file to test Microsoft 365 scripts. I copied portions of m365.json over to my custom devproxyrc.json to get a configuration I was happy with. Figure 1 shows the file being edited with Visual Studio Code.
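A trimmed configuration along these lines is what the result looks like. The plugin names come from the standard Dev Proxy distribution described above; the pluginPath value and exact schema should be checked against the m365.json shipped in the Config folder, as they vary by version:

```json
{
  "plugins": [
    {
      "name": "GraphSelectGuidance",
      "enabled": true,
      "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
    },
    {
      "name": "GraphMinimalPermissionsGuidance",
      "enabled": true,
      "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
    }
  ],
  "urlsToWatch": [
    "https://graph.microsoft.com/v1.0/*",
    "https://graph.microsoft.com/beta/*"
  ],
  "record": true,
  "showSkipMessages": false
}
```

The record and showSkipMessages settings shown here correspond to the behaviors discussed later in this article.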

Monitoring Scripts with Dev Proxy
After tweaking the configuration file, we’re ready to monitor Graph requests. In a PowerShell session, type:
devproxy --record
Alternatively, set "record": true in your Dev Proxy configuration file to have the proxy start up in record mode each time.
The proxy reads the configuration file to know what it should monitor and begins listening. As Graph requests are made by Microsoft Graph PowerShell SDK cmdlets or other applications, the proxy examines what happens and reports what it finds. I found it amusing that any time Outlook (classic) opens a message, it issues a Graph request to fetch the photo for the sender (Figure 2).

Pagination is not usually an issue for Microsoft Graph PowerShell SDK scripts. Most cmdlets that fetch information support an All parameter to instruct the Graph to return all available data. The cmdlet then takes care of processing nextlink URLs until it has retrieved all records. However, there are exceptions to the rule and Figure 3 shows an example. In this case, the script is fetching Exchange Online message trace data (see this article). The original call to the messageTraces endpoint is visible as is the call to fetch the next page of message trace results.
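For cmdlets or raw Graph calls without an All parameter, the classic pattern is to follow the @odata.nextLink URL manually until it disappears. A sketch using Invoke-MgGraphRequest; the endpoint and page size are illustrative:

```powershell
# Follow @odata.nextLink pages manually; endpoint/page size are illustrative
$uri = "https://graph.microsoft.com/v1.0/users?`$top=50"
$items = @()
do {
    $response = Invoke-MgGraphRequest -Method GET -Uri $uri
    $items += $response.value
    $uri = $response.'@odata.nextLink'   # null when no pages remain
} while ($uri)
```

Running a loop like this with Dev Proxy recording makes the nextLink requests visible, as in Figure 3.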

Note that Dev Proxy displays the underlying Graph request issued by the cmdlet rather than the cmdlet itself. This is similar to how the Debug parameter works when running Microsoft Graph PowerShell SDK cmdlets.
Restricting Properties with Select
An example of detecting when request performance could be improved by including Select to specify the properties of items to be fetched is shown in Figure 4. The top request is generated by Get-MgUser. The second is Get-MgUser with a -Select qualifier. Restricting the number of properties retrieved by requests won’t make much difference in small tenants. It will once you deal with thousands of objects.
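In Microsoft Graph PowerShell SDK cmdlets, the $select query parameter is exposed through the -Property parameter. A sketch; the property list is just an example of trimming a request to what a script actually uses:

```powershell
# Fetch only the properties the script needs;
# -Property maps to the Graph $select query parameter
Get-MgUser -All -Property 'id,displayName,userPrincipalName' |
    Select-Object Id, DisplayName, UserPrincipalName
```

With Dev Proxy recording, the first (unrestricted) form triggers the GraphSelectGuidance warning while the restricted form does not.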

The proxy generates the “skip” messages shown in these figures when a plugin doesn’t process a request for some reason (such as the beta plugin receiving a V1.0 request). To suppress the skip messages, make sure that the showSkipMessages instruction is set to false in your configuration file.
The Permissions Conundrum
Good security mandates that Microsoft Graph-based applications use least-privileged access. In other words, applications should be granted consent for the lowest possible level of permission required to do a job. It’s common to find that developers seek higher levels of permission “just in case,” through lack of knowledge, or because it’s hard to figure out the exact permissions. For instance, many do not know that the User.ReadBasic.All permission is available if an application only needs access to user properties like the display name and user principal name. In these circumstances, the higher User.Read.All permission is not required.
Dev Proxy observes the Graph requests made by an application and works out the permissions required to make those requests. At the end of a recording session, Dev Proxy reports the permissions held by an application that have not been used. If the application has fully exercised its functionality, the application doesn’t need the highlighted permissions and they can be removed.
Permission monitoring happens for app-only and interactive Graph SDK sessions. One of the characteristics of the Microsoft Graph Command Line Tools enterprise application, used to run interactive Graph sessions, is its propensity to accrue permissions over time. This isn’t a new issue. I first wrote about it in September 2021. The net effect is that interactive sessions usually have access to a bunch of delegated permissions. Figure 5 shows the report generated by Dev Proxy when it compared the permissions used in a session with those held by the application.

Of course, this is an outrageous example to prove the point. Applications should be checked to validate the permissions that they hold. Tenants should conduct regular reviews of permission assignments to ensure that standards hold, but developers should also check their permissions before launching their applications on the unwary.
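Outside Dev Proxy, a quick way to see what an interactive SDK session has accrued is to read the OAuth2 permission grants held by the Microsoft Graph Command Line Tools service principal. A sketch, assuming the session is consented to read service principals and permission grants:

```powershell
# Find the service principal used for interactive Graph SDK sessions
$Sp = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph Command Line Tools'"
# List the delegated permissions (scopes) granted to it
Get-MgOauth2PermissionGrant -Filter "clientId eq '$($Sp.Id)'" |
    Select-Object ConsentType, PrincipalId, Scope
```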
The Connect Issue
One odd thing that I discovered is that I couldn’t run Connect-MgGraph in a PowerShell session when Dev Proxy was actively monitoring. All attempts failed with the message:
Connect-MgGraph: InteractiveBrowserCredential authentication failed:
However, if I stopped the proxy, I could run Connect-MgGraph, issue some commands to make sure that everything worked, and then restart the proxy in record mode. Odd.
Some Value for PowerShell Developers
Dev Proxy is more useful to developers who build traditional applications than to those who develop PowerShell scripts based on the Microsoft Graph PowerShell SDK. Even so, people do experience difficulties figuring out permissions and other issues with scripts based on the Microsoft Graph, and any help is welcome. Dev Proxy is something that should be considered, if only for help to settle on the lowest possible permissions for any task.
Need help to write and manage PowerShell scripts for Microsoft 365, including Azure Automation runbooks? Get a copy of the Automating Microsoft 365 with PowerShell eBook, available standalone or as part of the Office 365 for IT Pros eBook bundle.
imuSensor and Allan Variance
Hello everyone,
I am creating an IMU simulation using the built-in imuSensor model in MATLAB. The block includes several parameters that define IMU noise characteristics, but I do not fully understand how these parameters relate to Allan variance–derived noise coefficients.
Here is the list of gyroscope parameters available in gyroparams:
——————————————————————————————
gyroparams with properties:
MeasurementRange: Inf rad/s
Resolution: 0 (rad/s)/LSB
ConstantBias: [0 0 0] rad/s
AxesMisalignment: [3⨯3 double] %
NoiseDensity: [0 0 0] (rad/s)/√Hz
BiasInstability: [0 0 0] rad/s
RandomWalk: [0 0 0] (rad/s)*√Hz
NoiseType: "double-sided"
BiasInstabilityCoefficients: [1⨯1 struct]
TemperatureBias: [0 0 0] (rad/s)/°C
TemperatureScaleFactor: [0 0 0] %/°C
AccelerationBias: [0 0 0] (rad/s)/(m/s²)
——————————————————————————————
I have estimated my sensor noise parameters from Allan variance analysis, specifically:
ARW (N)
Bias Instability (B)
Rate Random Walk (K)
My goal is to correctly map these Allan variance parameters N, B, and K to the corresponding imuSensor block parameters:
NoiseDensity
BiasInstability
RandomWalk
I would appreciate clarification on how these quantities correspond mathematically and physically, and how to correctly convert Allan variance results into the parameters expected by MATLAB’s IMU sensor model.
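For what it’s worth, the mapping appears to be direct provided N, B, and K are already expressed in the SI rate units shown in the property list above — (rad/s)/√Hz, rad/s, and (rad/s)·√Hz respectively (convert first if your Allan variance fit produced deg/√hr-style units). A sketch with made-up coefficient values:

```matlab
N = 1e-4;  % ARW coefficient from Allan variance fit, (rad/s)/sqrt(Hz)
B = 5e-6;  % bias instability, rad/s
K = 1e-7;  % rate random walk, (rad/s)*sqrt(Hz)

% Map the Allan variance coefficients onto the gyroparams properties
params = gyroparams('NoiseDensity', N, ...
                    'BiasInstability', B, ...
                    'RandomWalk', K);
imu = imuSensor('accel-gyro', 'Gyroscope', params, 'SampleRate', 100);
```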
allan variance, imu sensor MATLAB Answers — New Questions
Problem replicating the Venturi effect (pressure rise from small area to large area) with Simscape Gas
I’m trying to create a minimal model in Simscape to replicate the Venturi effect in a Simscape Gas model: Gas is flowing from point A to point B, and the cross-sectional area of the duct at point A is smaller than B; as a result, the static pressure at A must be lower than B (assuming negligible energy loss from A to B).
I tried two different setups to replicate this. First, I used a "Local Restriction (G)" block (with a fixed restriction area), which has this orifice structure built in. The flow is provided by a Flow Rate Source block, and I added a Flow Resistance block downstream to allow the pressure at the port B of the Local Restriction to vary. Here is a screenshot of the model:
I expect that at least for some combination of restriction area and flow rate, the pressure at the restriction, p_R, must be lower than port B. I tried running the simulation with different values of flow rate and restriction area, but p_R was always higher than pressure at B.
Next, I used two consecutive Pipe (G) elements: the first one with a smaller surface area and hydraulic diameter, and the second with larger values for both. The rest of the setup is similar:
Again, I expect the internal pressure of the Small Pipe to be lower than the internal pressure of the Big Pipe, but this was never the case. I ran the simulation for different values of pipe surface area and hydraulic diameter for the two pipes, and kept the pipe length, internal surface roughness, and the laminar friction constants low to reduce the pressure loss due to friction. I also tried reducing the dynamic viscosity in the Gas Properties block, and disabling/enabling gas compressibility in the pipes.
In both models, I kept the rest of the settings and parameters as default (e.g., perfect gas with properties of dry air, daessc solver, etc.). The models are attached.
Am I doing something wrong, or are there any limitations and theoretical assumptions in Simscape Gas that do not allow replicating this effect?
Thanks in advance!
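As a sanity check on what the lossless theory predicts, a quick incompressible Bernoulli estimate of the static pressure rise from the small section to the large one (all values made up for illustration):

```matlab
rho  = 1.2;             % kg/m^3, roughly dry air at ambient conditions
mdot = 0.01;            % kg/s, assumed mass flow rate
A_A  = 1e-4;            % m^2, small cross-section at A
A_B  = 4e-4;            % m^2, large cross-section at B
v_A  = mdot/(rho*A_A);  % continuity: v = mdot/(rho*A)
v_B  = mdot/(rho*A_B);
dp   = 0.5*rho*(v_A^2 - v_B^2);  % Bernoulli: p_B - p_A, expected > 0
```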
simscape, gas, fluid dynamics, simulation MATLAB Answers — New Questions