Month: October 2024
10 Days Left: Share Your Experience with Azure AI and Support a Charity
Siemens uses Azure AI to bridge the gap between field and shop floor workers and operations and engineering teams, improving problem resolution times and fostering cross-functional collaboration.
ASOS leverages Azure AI to enhance customer engagement through personalized fashion recommendations, making the shopping experience more interactive and enjoyable.
Perplexity.AI relies on Azure AI Studio to scale its conversational answer engine, delivering faster response times and reducing operational costs.
The reviewer must be an Azure AI customer and a working enterprise professional, including technology decision-makers, enterprise-level users, executives, and their teams (e.g., students and freelancers are not eligible to submit a review). The reviewer must attest to the authenticity of the review by certifying that (i) he or she is not an employee of Microsoft, a partner of Microsoft, or a direct competitor of Microsoft; (ii) he or she is not employed by an organization with an exclusive relationship with the product being reviewed (this includes exclusive partners, value-added resellers, system integrators, and consultants); and (iii) the feedback is based entirely on his or her own personal experience with Microsoft’s Azure AI.
Your privacy is paramount, as only your role, industry, and organization size will be displayed.
The offer is good only for those who submit a product review on Gartner Peer Insights and receive confirmation their review has been approved by Gartner. Limited to one reward per person. This offer is non-transferable and cannot be combined with any other offer. The offer is valid while supplies last. It is not redeemable for cash. Taxes, if there are any, are the sole responsibility of the recipient. Any gift returned as non-deliverable will not be re-sent. Please allow 6-8 weeks for shipment of your gift. Microsoft reserves the right to cancel, change, or suspend this offer at any time without notice. This offer is not applicable in Cuba, Iran, North Korea, Sudan, Syria, Crimea, Russia, and China. For more information, please refer to Gartner Peer Programs Community Guidelines.
Microsoft Tech Community – Latest Blogs –Read More
Recap of Logic Apps Community Day 2024
On September 26th, 2024, the Logic Apps Aviators community gathered for an inspiring and informative Logic Apps Community Day. The event was filled with insightful sessions led by a diverse group of experts from your Aviators community. Below is a recap of the presentations and key takeaways from each session, complete with links in case you missed one or want to watch again! You can also view the playlist of presentations, covering the full event, on YouTube.
Igor Shvets: Unleash the Power of .NET C# in Azure Logic Apps
Igor Shvets, CEO of Avant-Garde 365, kicked off the event by showcasing how C# can be leveraged in Azure Logic Apps through various practical scenarios and examples. Igor’s session highlighted the flexibility and power of integrating .NET C# with Logic Apps.
Prashant Kumar Singh: BizTalk to AIS Migration – Learnings from the field
Next, Prashant Kumar Singh, a Solution Architect at HPE, shared his experiences and best practices for migrating from BizTalk to AIS. His insights were drawn from real-world projects, providing invaluable advice for those planning similar migrations.
Sebastian Meyer: Seamless SAP Integration with Azure Logic Apps
Sebastian Meyer, a consultant at QUIBIQ and a Microsoft Azure MVP, demonstrated how Azure Logic Apps can streamline SAP integration with cloud services. His presentation showcased examples of enhanced efficiency and synchronization.
Luis Rigueira: Workflow and Trigger expressions can help monitor your Logic Apps
Luis Rigueira, an Integration Developer at DevScope, delved into the power of workflow and trigger expressions for monitoring Logic Apps. He explained how these tools can help you stay notified about failures, providing details such as run times and app names.
Mattias Lögdberg: Master PaaS networking for AIS
Mattias Lögdberg from DevUP Solutions simplified the often complex world of PaaS networking options in Azure. His session clarified the different networking types and their appropriate use cases, making it easier for participants to navigate their choices.
Ankit Gupta: Mastering Azure Logic Apps: Renewing Lock Tokens in Peek-Lock Service Bus Triggers
Ankit Gupta, a Cloud Solution Architect at TCS, illustrated how to build more resilient workflows by renewing lock tokens in peek-lock service bus triggers. His practical examples provided a clear path to seamless message processing.
Cameron McKay: Transitioning from Pro to Low Code with Azure Logic Apps
Cameron McKay, a Cloud Architect at MNP LLP, offered guidance for pro code developers transitioning to low code environments. His focus on best practices for maintainable and reusable workflows was particularly valuable.
Angad Soni: Use Case: Predictive Maintenance for Manufacturing Equipment
Angad Soni from Long View Systems demonstrated how to implement predictive maintenance systems using Logic Apps. His session on preventing downtime through scheduled maintenance tasks was both informative and practical.
Sandro Pereira: Logic Apps: Everything you need to know about error handling
Sandro Pereira, a well-known figure in the Azure community, shared his extensive knowledge on creating robust error handling patterns in Logic Apps. His presentation was packed with actionable advice based on his vast experience.
Sabahat Faria: Integration between CE and FinOps using Logic Apps
Sabahat Faria from Systems Limited focused on integrating CRM systems using Logic Apps. She provided a detailed demo on uploading data to environments like Dataverse and D365 for Finance and Operations.
Paco de la Cruz: Extending your Azure Integration Solution with OpenAI
Paco de la Cruz from Deloitte Engineering Australia explored how to extend Azure integration solutions using OpenAI. His session illustrated the innovative applications of AI in enhancing integration projects.
Mick Badran: Using AI in Healthcare through Logic Apps
Finally, Mick Badran, a veteran Azure MVP, presented a real-life use case of using AI and Logic Apps in healthcare. His session demonstrated the potential of AI-driven diagnostics and vital monitoring to support healthcare providers.
Logic Apps Community Day 2024 was a remarkable event, filled with cutting-edge insights and practical knowledge. We extend our heartfelt thanks to all the speakers and participants who made this day a success. Stay tuned for more events and continue to post your innovations with Azure Logic Apps to #LogicAppsAviators!
how to plot a graph of this equation?
y = 1/x^2 + 8 MATLAB Answers — New Questions
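For a quick numeric check, here is a small illustrative sketch (in Python rather than MATLAB, purely to tabulate points; in MATLAB a one-liner such as fplot(@(x) 1./x.^2 + 8, [-5 5]) would plot the curve directly). It skips x = 0, where the function has a vertical asymptote:

```python
# Tabulate (x, y) pairs for y = 1/x^2 + 8, skipping x = 0 where the
# curve blows up. The resulting pairs can be handed to any plotting tool.
def curve(x_values):
    return [(x, 1.0 / x**2 + 8.0) for x in x_values if x != 0]

xs = [i / 10 for i in range(-50, 51)]   # -5.0 ... 5.0 in steps of 0.1
points = curve(xs)
```

Note that every y value stays above the horizontal asymptote y = 8, and the curve spikes toward infinity on both sides of x = 0.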
Trouble using ODE45 for coupled non-linear differential equation
Hi,
I am attempting to model the back-filling of a large vacuum vessel with nitrogen. The flow of the nitrogen is controlled by a valve with a certain valve flow coefficient. P2 is constant and approximately one atmosphere. I have derived the following equations to model this.
So I wrote the following code with ODE45 to monitor the pressure over time.
% IC Vector
IC_BF = [P01, Q0, dP0, dQ0];
t = [0 10];
options = odeset('RelTol', 1e-12);
[t_ode45, Result] = ode45(@BF_dyn, t, IC_BF, options);
function result = BF_dyn(t, x)
% Constants
delta_P0 = 6894.76; % Pa
P01 = 1.33322e-5; % Pa
dP01 =0; % Pa
R = 8.314; % J / mol·K
C_v = 0.28*(6.309e-5/1); % (gallons/min) ( 6.309e-5 (m^3/s)/ 1 (gallons/min)) = m^3/s
Tamb = 293; % K
G = 0.967;
dewar_vol = 4.4; % m^3
rho_N2 = 1.25; %kg/m^3
M_N2 = 28.02*(1/1000); % g/mol * (1 kg/1000g) = kg/mol
P02 = 6894.76; % Pa
b = (sqrt(G)/(C_v))^2;
a = rho_N2*R*Tamb/(M_N2*dewar_vol);
result = [
-2*x(2)*x(4)*b;
(1/a)*x(4);
x(3);
x(4);
];
end
However, when I plot the results I get the following, which goes significantly higher than P2 before attempting to go negative when in reality it should asymptote out to P2.
Am I implementing this system in ODE45 incorrectly? Is there any way to incorporate the relationship between P1 and P2 into ODE45 by adding an additional argument to my results vector?
Thank you for any assistance you can offer! ode45 MATLAB Answers — New Questions
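As an editorial aside: note that in the posted BF_dyn the third and fourth returned elements are x(3) and x(4) themselves, i.e. dx3/dt = x3, which grows exponentially; that alone could explain the overshoot and blow-up. A quick way to sanity-check the expected asymptotic behaviour is to integrate a simplified surrogate model. The sketch below (plain Python standing in for the MATLAB code) uses a hypothetical first-order fill law dP1/dt = k * (P2 - P1) with an invented rate constant k; with any such form the pressure rises monotonically and asymptotes to P2 instead of overshooting. In ode45 the corresponding requirement is that element i of the returned vector really is the time derivative of state x(i).

```python
# Hedged sketch (not the original model): a first-order surrogate for
# back-filling, dP1/dt = k * (P2 - P1), integrated with classic RK4.
# k is a made-up rate constant standing in for the valve-coefficient
# terms; P2 is the constant supply pressure (~1 atm).
P2 = 101325.0   # Pa, approximately one atmosphere
k = 2.0         # 1/s, hypothetical effective fill rate

def dP(p):
    return k * (P2 - p)

def rk4_fill(p0, dt, steps):
    p = p0
    history = [p]
    for _ in range(steps):
        k1 = dP(p)
        k2 = dP(p + 0.5 * dt * k1)
        k3 = dP(p + 0.5 * dt * k2)
        k4 = dP(p + dt * k3)
        p += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        history.append(p)
    return history

trace = rk4_fill(p0=1.33e-5, dt=0.01, steps=1000)  # 10 s of fill
```

The trace rises monotonically from near-vacuum and settles at P2, which is the physical behaviour the question expects from the real model.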
loss, the classification error
I’m using the "loss" function to calculate a classification error.
Below is a confusion matrix that has one misclassification.
In the above case, I think the loss should be 1/7*100 = 14.3 %.
But the "loss" function shows 15.9 %.
It seems the "loss" function has special logic to calculate the loss.
So, I’d like to know what it is, and, if possible, how to get a loss value of 14.3 % by modifying an option in the "loss" function.
loss classification error MATLAB Answers — New Questions
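As an editorial aside, classification loss functions generally compute a weighted average over observations (for example using class priors or observation weights) rather than a raw error count, which is one way a 14.3 % raw rate can come out as a different percentage. The Python sketch below uses invented weights purely to illustrate the mechanism; it does not claim to reproduce MATLAB's exact 15.9 % figure.

```python
# Hypothetical illustration: plain misclassification rate vs. a
# weighted loss. The observation weights here are invented; when a
# loss function normalizes observation/class weights, its result can
# differ from a raw error count over the confusion matrix.
def error_rate(y_true, y_pred):
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

def weighted_loss(y_true, y_pred, w):
    wrong = sum(wi for t, p, wi in zip(y_true, y_pred, w) if t != p)
    return wrong / sum(w)

y_true = [0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 0]        # one of seven misclassified
plain = error_rate(y_true, y_pred)    # 1/7, about 14.3 %
w = [1, 1, 1, 1, 1.25, 1.25, 1.25]    # made-up per-class re-weighting
weighted = weighted_loss(y_true, y_pred, w)  # higher than 1/7
```

Passing uniform observation weights (so every sample counts equally) is the usual way to make a weighted loss coincide with the raw misclassification rate.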
volumeViewer Display Parameter Adjustment?
Hello,
here’s my 2D image Display (1 slice of 3D data) using imtool:
If I display the 3D file using volumeViewer, it looks like this:
How can I make the gray square black like the rest of the background in the 3D volumeViewer? Is there a parameter I can adjust in the volumeViewer function?
Thank you! volumeviewer MATLAB Answers — New Questions
New Blog | New Copilot for Security Plugin Name Reflects Broader Capabilities
The Copilot for Security team is continuously enhancing threat intelligence (TI) capabilities in Copilot for Security to provide a more comprehensive and integrated TI experience for customers. We’re excited to share that the Copilot for Security Threat Intelligence plugin has broadened beyond just MDTI to now encapsulate data from other TI sources, including Microsoft Threat Analytics (TA) and SONAR, with even more sources becoming available soon.
To reflect this evolution, customers may notice a change in the plugin’s name from “Microsoft Defender Threat Intelligence (MDTI)” to “Microsoft Threat Intelligence,” reflecting its broader scope and enhanced capabilities.
Since its launch in April, Copilot for Security customers have been able to access, operate on, and integrate the raw and finished threat intelligence from MDTI, developed from trillions of daily security signals and the expertise of more than 10,000 multidisciplinary analysts, through simple natural language prompts. Now, with the ability for Copilot for Security’s powerful generative AI to reason over more threat intelligence, customers have a more holistic, contextualized view of the threat landscape and its impact on their organization.
Read the full post here: New Copilot for Security Plugin Name Reflects Broader Capabilities
By Michael Browning
Anomaly Excessive NXDOMAIN DNS Queries – analytics rule
I have noticed that we see quite a few endpoints triggering the Excessive NXDOMAIN DNS Queries anomaly analytics rule in Microsoft Sentinel. When I investigate these for tuning purposes, I see that the vast majority of these queries (in the in-addr.arpa domain) are for IP addresses owned by Microsoft. It appears that Microsoft has no interest in publishing reverse DNS entries, because I am unable to resolve them from any online DNS tools. The whois records do point to Microsoft, though.
What’s a good way to either stop this from happening, or eliminate the Microsoft IP address space from the query results?
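One pragmatic option, sketched here in Python with placeholder prefixes (substitute the published Microsoft/Azure ranges, or a Sentinel watchlist, for real tuning; the analytics rule's query could apply the same exclusion), is to drop reverse-lookup queries for IPs inside known address blocks before counting NXDOMAIN responses:

```python
# Hedged sketch: pre-filter reverse-lookup (in-addr.arpa) query names so
# that lookups for IPs inside known address blocks are dropped before
# they feed an "excessive NXDOMAIN" count. The prefixes below are
# illustrative placeholders, not an authoritative Microsoft range list.
import ipaddress

EXCLUDED = [ipaddress.ip_network(p) for p in ("20.0.0.0/8", "40.64.0.0/10")]

def arpa_to_ip(qname):
    """Convert '4.3.2.20.in-addr.arpa' back to the IP '20.2.3.4'."""
    octets = qname.replace(".in-addr.arpa", "").split(".")
    return ipaddress.ip_address(".".join(reversed(octets)))

def keep_query(qname):
    ip = arpa_to_ip(qname)
    return not any(ip in net for net in EXCLUDED)

queries = ["4.3.2.20.in-addr.arpa", "9.8.8.8.in-addr.arpa"]
kept = [q for q in queries if keep_query(q)]
```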
Formula needed
Hello
I am trying to find a formula. I want it to take an amount from a monthly expense sheet and put it into my income and expense sheet. My friend did one for the expense side. I’ve tried it for the income side of my balance sheet, but it doesn’t work. My friend used a formula of =`Income vs Expenditure`!cell number.
Any help would be great
Roger
How to schedule a meeting with Copilot in Outlook
With Copilot, scheduling a meeting from an email thread is quick and simple. By selecting the Schedule with Copilot option, Copilot reviews the email and generates a meeting invitation. It automatically fills in the meeting title and agenda, and attaches the email thread. The participants from the email thread are added as attendees, allowing you to easily review, modify details if needed, and send out the invitation.
#MicrosoftCopilot #Copilot #AI #Microsoft365 #MPVbuzz
help with a formula – i think vlookup and if?
Hi,
I’m hoping someone could please help me with a formula that I am struggling with. I have two sheets in the file. One is the main page that is used, and the other has the data. The main page contains 5 columns. Three of the columns have existing data: column A is a unique number, column B is a ship-to number, and column C has the weight. Columns D (how to ship) and E (what code) are what I would like to fill in with a formula from the data tab. The data tab has 4 columns: ship to, weight, how to ship, and what code. How to ship depends on what the weight is.
Main page:
UNIQUE NUMBER | SHIP TO | WEIGHT | HOW TO SHIP | WHAT CODE
4084480       | P5240   | 60     |             |
4212822       | P5240   | 175    |             |
Data tab:
SHIP TO | WEIGHT    | HOW TO SHIP | WHAT CODE
P5240   | <=60      | AIR         | SPECIAL 1
P5240   | >60 <=250 | AIR         | TO BE ADVISED
P5240   | >250      | SEA         | D13456
what combined formula can i use to retrieve this information for me?
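The logic the formula needs, shown here as a hypothetical Python sketch built from the tables in the question, is a two-step lookup: match the SHIP TO code, then pick the first weight band that fits. In Excel this maps to a lookup on the ship-to code combined with a nested IF (or IFS) on the weight.

```python
# Hypothetical sketch of the tiered lookup. Each band is
# (ship_to, upper_weight_bound, how_to_ship, what_code); bands are
# checked in order, so the first bound that the weight fits under wins.
BANDS = [
    ("P5240", 60,           "AIR", "SPECIAL 1"),      # weight <= 60
    ("P5240", 250,          "AIR", "TO BE ADVISED"),  # 60 < weight <= 250
    ("P5240", float("inf"), "SEA", "D13456"),         # weight > 250
]

def lookup(ship_to, weight):
    for code, upper, how, what in BANDS:
        if code == ship_to and weight <= upper:
            return how, what
    return None  # no matching ship-to code / band

rows = [("4084480", "P5240", 60), ("4212822", "P5240", 175)]
results = {uid: lookup(st, w) for uid, st, w in rows}
```

The same ordered-band idea is why an Excel approach should test the smallest weight bound first.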
Unlocking the Future: Insights from DEVintersection 2024
The DEVintersection Conference, held recently in Las Vegas, showcased the latest in Microsoft technologies. Featuring keynotes on the Azure Developer Experience from Microsoft leaders like Scott Hanselman, VP of Developer Community, and Scott Hunter, VP Director of Product at Microsoft, along with insights from engineers, Microsoft MVPs & Regional Directors (RD), and industry experts, the event was a hub of innovation. DEVintersection founder and Microsoft MVP & RD Richard Campbell shares highlights from the event.
What inspired the creation of DEVintersection, and how has it evolved over the years?
Richard: The whole idea of intersection was to bring together different groups of developers so that they could learn from each other. In the early days of DEVintersection (2012-2015), there was a focus on open web technologies. We co-located a conference called AngleBrackets with an open web content focus and then gave space to mix the two audiences. Over the years, we’ve also included data with SQLIntersection and, for a time, the M365 Conference.
Today, DEVintersection co-locates with the NextGenAI Conference – the goal is to create an intersection between the different audiences and viewpoints to help foster a deeper understanding of both technology stacks.
Any time you can bring more ideas together, you get powerful results!
Do you have reflections on DEVIntersection’s growth over the years?
Richard: DEVintersection has evolved as the community has evolved. The demands on developers continue to grow, and we’re there to help. As new technologies emerge, we intersect them, bringing new experts and viewpoints to the DEVintersection audience.
How does Microsoft’s participation enhance the learning and networking opportunities for attendees at DEVintersection? Were there unique insights or sessions from Microsoft this year?
Richard: We see Microsoft as partners in telling the development story to the audience. We like to bring Microsoft engineers to the event so that they can talk about the how and why of what they’ve built. Then our industry experts add to the conversation with their experiences of deploying those technologies to customers. The intersection between industry and Microsoft is where the value lies – you can see the future from there!
Can you share any success stories from past attendees who have implemented what they learned at DEVintersection?
Richard: During the years of SQLintersection, I would host a panel discussion at the end of each of the conferences that we would record as a RunAs Radio podcast, once or twice a year, from 2014 to 2022. If you listen to those shows (and I know the SQL team does!) you’ll hear the shift in how people view SQL Server. Often it’s the same attendees, year to year, asking progressively more complex questions.
There, too, you can see the gradual embrace of the cloud as a data store: initial skepticism, followed by exploration, and ultimately adoption.
It’s the most rewarding part of creating a conference.
How do you select the speakers and topics for each conference and ensure a diverse and inclusive lineup of speakers and sessions?
Richard: While we do have a Call for Proposals (CFP), it’s not our primary focus for recruiting speakers, precisely because of diversity and inclusion: the CFP process does not appeal to the largest and most diverse group of speakers. Instead, we recruit directly. By being part of the community and involved with a huge array of different types of events, we connect with the best and brightest in the industry and then ask them to be part of our show. It’s the best way to get a broad spectrum of people involved in a conference.
Curious for more? Check out videos of keynote sessions, interviews, and highlights from past DEVintersection and NextGenAI Conferences.
Problem with updateLimitsAndDirection function
Good afternoon,
I’m using MATLAB’s Visual SLAM with RGB-D Camera example (https://uk.mathworks.com/help/vision/ug/visual-slam-with-an-rgbd-camera.html), and I got this error message:
"Error using worldpointset/updateLimitsAndDirection
Invalid location for world point 1. It coincides with one of the views that observes the world point."
when I execute this line of code:
mapPointSet = updateLimitsAndDirection(mapPointSet, rgbdMapPointsIdx, vSetKeyFrames.Views);
In:
% Create an empty imageviewset object to store key frames
vSetKeyFrames = imageviewset;
% Create an empty worldpointset object to store 3-D map points
mapPointSet = worldpointset;
% Add the first key frame
vSetKeyFrames = addView(vSetKeyFrames, currKeyFrameId, initialPose, Points=currPoints, ...
Features=currFeatures.Features);
% Add 3-D map points
[mapPointSet, rgbdMapPointsIdx] = addWorldPoints(mapPointSet, xyzPoints);
% Add observations of the map points
mapPointSet = addCorrespondences(mapPointSet, currKeyFrameId, rgbdMapPointsIdx, validIndex);
% Update view direction and depth
mapPointSet = updateLimitsAndDirection(mapPointSet, rgbdMapPointsIdx, vSetKeyFrames.Views);
In the example code, I’ve just modified the download and browse part of the input image sequence.
The code I’ve added is as follows:
% Select the synchronized image data
Path_RGB = 'C:Documents2.5RGB cameraImage for bad of wordsRGB';
Path_Grey = 'C:Documents2.5RGB cameraImage for bad of wordsDepth';
imdsColor = imageDatastore(Path_RGB);
imdsDepth = imageDatastore(Path_Grey);
% imdsColor = subset(imdsColor, indexPairs(:, 1));
% imdsDepth = subset(imdsDepth, indexPairs(:, 2));
% Inspect the first RGB-D image
currFrameIdx = 1;
currIcolor = readimage(imdsColor, currFrameIdx);
currIdepth = readimage(imdsDepth, currFrameIdx);
currIdepth = rgb2gray(currIdepth);
[rows, colums] = size(currIdepth);
currIcolor = imresize(currIcolor, [rows colums]);
imshowpair(currIcolor, currIdepth, "montage");
I am using MATLAB R2023b, and I’ve done some research on the Internet, but I haven’t found a solution. Can you help me?
Thank you very much. rgb-d camera, updatelimitsanddirection, computer vision toolbox MATLAB Answers — New Questions
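One common trigger for this error, offered here only as a hypothesis, is a degenerate 3-D point (for example from a zero or invalid depth value, which converting the depth image with rgb2gray could plausibly introduce) landing exactly on a camera position. A language-agnostic pre-filter, sketched in Python with made-up data, would run before something like addWorldPoints:

```python
# Hedged sketch: drop 3-D points that sit at (or numerically on top of)
# a camera position, since such degenerate points are exactly what
# "Invalid location for world point ... coincides with one of the
# views" complains about. Threshold and data are illustrative.
def filter_points(xyz_points, camera_positions, eps=1e-6):
    kept, kept_idx = [], []
    for i, p in enumerate(xyz_points):
        degenerate = any(
            sum((pc - cc) ** 2 for pc, cc in zip(p, cam)) < eps ** 2
            for cam in camera_positions
        )
        if not degenerate:
            kept.append(p)
            kept_idx.append(i)
    return kept, kept_idx

cams = [(0.0, 0.0, 0.0)]                       # first key frame at origin
pts = [(0.0, 0.0, 0.0), (0.1, 0.2, 1.5)]       # first point is degenerate
good, idx = filter_points(pts, cams)
```

Keeping the surviving indices (idx) lets the downstream correspondence bookkeeping stay aligned with the filtered point list.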
Cleanup of stale DB entries skipped
Hello,
My Microsoft SQL Server error log very often contains the following lines: “RemoveStaleDbEntries: Cleanup of stale DB entries skipped because master db is not memory optimized.
Parallel redo is started for database ‘database_y’ with worker pool size [4].
Parallel redo is shutdown for database ‘database_y’ with worker pool size [4].
RBPEX::NotifyFileShutdown: Called for database ID: [5], file Id [0]
Starting up database ‘database_y’. ” where database_y has database ID: [5]
DBCC CHECKDB (‘database_y’)
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘database_y’
Do you have any idea how to continue the investigation?
Thank you.
Best regards,
Matei
Microsoft 365 Groups Public Roadmap
Is there a public roadmap for Microsoft 365 Groups?
Mostly everything seems straightforward with Microsoft 365 Groups, but I have a large subset of employees who are resistant to changing to Microsoft 365 Groups because the Groups tab in Outlook on the web will not search the files or calendar events, only the mail within the shared mailbox. If there is a public roadmap, I was hoping to see whether anything will be changing with the product or if it is mainly stuck the way it is.
Optimizing Language Model Inference on Azure
By Shantanu Deepak Patankar, Software Engineer Intern, and Hugo Affaticati, Technical Program Manager 2
Inefficient inference optimization can lead to skyrocketing costs for customers, making it crucial to establish clear performance benchmarking numbers. This blog sets the standard for expected performance, helping customers make informed decisions that maximize efficiency and minimize expenses with the new Azure ND H200 v5-series.
We evaluated the inference performance of the new Azure ND H200 v5-series for Small Language Models (SLMs) and Large Language Models (LLMs). The ND H200 v5-series, powered by eight NVIDIA H200 Tensor Core GPUs, offers a 76% increase in GPU memory over the NVIDIA H100 Tensor Core GPU of the ND H100 v5-series. We compared three models: Phi-3 medium (14B parameters), Mistral v0.1 (7B parameters), and Llama 3.1 (8B, 70B, and 405B parameters) to set performance standards and empower Azure customers to optimize their workloads for time or resources.
Model Architecture
Achieving optimal performance requires a clear understanding of where time is spent during the inference workload, enabling effective optimization. The first critical step is to carefully examine the parameters that directly impact performance. For the models discussed, and more broadly, these key parameters include input sequence length, output sequence length, batch size, and tensor parallelism. In this article, we measured the impact of these variables using two essential metrics: throughput and first token latency.
The inference process can be categorized into three primary components: pure computation phases (e.g., local GEMMs), pure communication phases (e.g., all-reduce), and attention phases. Analyzing the Llama3 8B model on the new ND H200 v5 virtual machine revealed that computation consistently accounts for at least 50% and up to 85% of total inference time. Communication time ranges from 10% to 25%, scaling as the number of GPUs increases from 2 to 8. In contrast, attention mechanisms consistently represent less than 10% of the total time spent, as shown in Table 1. This article aims to guide customers in striking the right balance between computation and communication when selecting their AI inference architecture, based on whether time efficiency or cost-effectiveness is their primary goal.
| Tensor Parallelism | Computation (% of time spent) | Communication (% of time spent) | Attention (% of time spent) |
|---|---|---|---|
| 1 GPU | 83.3 | 0 | 9.2 |
| 2 GPUs | 70.7 | 10.8 | 7.4 |
| 4 GPUs | 56.7 | 24.7 | 6.1 |
| 8 GPUs | 57.2 | 25.1 | 8.2 |
Table 1: Breakdown of time spent per mechanism for LLAMA 3 8B inference on the ND H200 v5 virtual machine, with an input sequence length of 1024, output sequence length of 128, and batch size of 32.
Resource optimization
Since most of the inference time is spent on computation, the GPU computational speed has a tremendous impact on the overall performance. Understanding the memory requirements ensures better GPU usage. The two main factors influencing GPU memory consumption are the model weights and the key-value cache.
Model Weights: the memory occupied by the model weights depends on the number of parameters and the quantization of the model. The memory required can be calculated using the formula:
Memory used (in GB) = number of parameters (in billions) × precision (in bits / 8)
For example, the model weights of a LLAMA 3 model using 8B parameters and FP8 precision would require 8 GB of memory (8B parameters x 8 / 8 = 8 GB).
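The weight-memory formula above can be expressed as a small helper function (a sketch in Python, for illustration):

```python
def weight_memory_gb(params_billions: float, precision_bits: int) -> float:
    """Memory used (GB) = parameters (billions) * precision (bits) / 8."""
    return params_billions * precision_bits / 8

# LLAMA 3 8B at FP8 precision: 8 * 8 / 8 = 8 GB
print(weight_memory_gb(8, 8))    # 8.0
# The same model at FP16 doubles the footprint
print(weight_memory_gb(8, 16))   # 16.0
```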
Key-Value Cache: since the attention score of each token depends only on the preceding tokens, the model stores the key and value matrices in a cache to avoid recalculating attention values for every token in the sequence. Storing both keys and values accounts for the factor 2 in the equation below.
Size of KV cache (in B) = batch size * sequence length * 2 * number of layers * (number of heads * dimension of head) * precision (in bits / 8)
For example, the key-value cache of a LLAMA 3 model using 8B parameters, FP8 precision, input size 1024, and output length 128 would require 0.5 GB of memory for a batch size of 1 (1 x (1024+128) sequence length x 2 x 32 layers x 4096 x 8 / 8 = 0.5 GB)
By using these two quantities, customers can accurately estimate the maximum batch size that the virtual machines can accommodate for their model, thereby optimizing resource utilization. The available GPU memory is calculated by subtracting the weight memory from the total GPU memory when the system is idle. The maximum batch size is then determined by dividing the available memory by the size of the KV cache required for a batch size of one. Table 2 provides several examples of these theoretical batch sizes. This approach not only simplifies the process but also helps customers avoid the trial-and-error method, which can lead to higher GPU consumption and increased costs.
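Combining the two formulas, this estimate can be sketched as a short script. Note that the layer count and hidden size below are typical values for an 8B-class model assumed for illustration, not figures from the article, and rounding conventions may make the result differ slightly from Table 2:

```python
def kv_cache_gb(batch, seq_len, n_layers, hidden, precision_bits):
    # batch * sequence length * 2 (keys and values) * layers * hidden size * bytes per value
    return batch * seq_len * 2 * n_layers * hidden * (precision_bits / 8) / 1e9

def max_batch_size(gpu_mem_gb, weight_mem_gb, seq_len, n_layers, hidden, precision_bits):
    # Available memory = idle GPU memory minus model weights;
    # max batch size = available memory / KV cache of a single sample.
    available = gpu_mem_gb - weight_mem_gb
    return int(available // kv_cache_gb(1, seq_len, n_layers, hidden, precision_bits))

# Assumed 8B-class config: 32 layers, hidden size 4096, FP16 KV cache,
# sequence length 1024 + 128 = 1152, on a 140 GB GPU holding 16 GB of weights.
print(max_batch_size(140, 16, 1152, 32, 4096, 16))
```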
| Model | ND H200 v5 memory per GPU (GB) | Number of parameters (billions) | Weight memory (GB) | Available memory (GB) | KV cache size (GB) | Max batch size |
|---|---|---|---|---|---|---|
| LLAMA 3 | 140 | 8 | 16 | 124 | 0.60 | 206 |
| Mistral | 140 | 7 | 14 | 126 | 0.60 | 210 |
| Phi-3 medium | 140 | 14 | 28 | 115.8 | 0.94 | 123 |
Table 2: Theoretical maximum batch size for inference with various language models (LLAMA 3 8B, Mistral, Phi-3 medium) on the ND H200 v5 virtual machine with sequence length 1152 and FP8.
Empirical results closely match these theoretical limits. Figure 1 below highlights the maximum batch size that maximizes the usage of one NVIDIA H200 Tensor Core GPU, then scales up to all eight GPUs of the latest ND H200 v5 virtual machine, with the corresponding throughput. By optimizing the batch size, customers can extract extra performance from each GPU, fully utilizing available resources. This ensures that every virtual machine operates at its peak capacity, maximizing performance while minimizing cost.
Figure 1: Experimental maximum batch size as a function of tensor parallelism (TP) for inference with LLAMA 3 8B on the ND H200 v5 virtual machine with sequence total length 1152.
Time optimization
For some specific workloads, time is of the essence. While increasing the batch size can enhance throughput and maximize resource utilization, it also leads to higher latency. By measuring both latency and throughput of the inference workload, the optimal balance can be determined. For instance, when running models like Llama 3 and Mistral on a single GPU of the latest ND H200 v5 virtual machine, a batch size of 32 delivers the highest throughput-to-latency ratio, as shown in Figure 2. The optimum batch size is specific to the customer's workload, as highlighted by the Phi-3 model, which achieves its highest ratio at a batch size of 64 with a single GPU. When scaling to two GPUs, the optimal batch size increases to 64, as illustrated in Figure 3. Although this approach may not fully utilize the available memory, it achieves the lowest possible latency for inference, making it ideal for time-sensitive applications.
Figure 2: Experimental optimal throughput to latency balance as a function of batch for inference with LLAMA 3, Phi-3 and Mistral on a single GPU of the ND H200 v5 virtual machine with sequence total length 1152, FP8, and TP 1.
Figure 3: Experimental optimal throughput to latency balance as a function of batch for inference with LLAMA 3, Phi-3 and Mistral on two GPUs of the ND H200 v5 virtual machine with sequence total length 1152, FP8, and TP 2.
Microsoft Tech Community – Latest Blogs –Read More
Level Up Your Security Skills with the New Microsoft Sentinel Ninja Training!
If you’ve explored our Microsoft Sentinel Ninja Training in the past, it’s time to revisit!
Our training program has undergone some exciting changes to keep you ahead of the curve in the ever-evolving cybersecurity landscape.
Microsoft Sentinel is a cutting-edge, cloud-native SIEM and SOAR solution designed to help security professionals protect their organizations from today’s complex threats.
Our Ninja Training program is here to guide you through every aspect of this powerful tool.
So, what’s new? In addition to the structured security roles format, the Ninja Training now offers a more interactive experience with updated modules, hands-on labs, and real-world scenarios. Whether you’re focusing on threat detection, incident response, or automation, the training ensures you gain the practical skills needed to optimize your security operations.
One of the biggest updates is the integration of Sentinel into the Defender XDR portal, creating a unified security platform. This merger simplifies workflows, speeds up incident response, and minimizes tool-switching, allowing for seamless operations.
Other highlights include:
Step-by-step guidance through the official Microsoft Sentinel documentation.
Exclusive webinars and up-to-date blog posts from Microsoft experts.
If you’re ready to take your Sentinel skills to the next level or want to revisit the program’s new features, head over to the blog now and dive into the refreshed Microsoft Sentinel Ninja Training!
Don’t miss out—your next cybersecurity breakthrough is just a click away!
Microsoft Tech Community – Latest Blogs –Read More
Microsoft 365 Copilot GCC Readiness Days: Unlock AI-Powered Productivity for Public Sector Agencies
Join us in Reston, VA, on October 15th, 16th, or 17th for our Microsoft 365 Copilot GCC Readiness Days!
This exclusive in-person event is your chance to learn how AI and Microsoft 365 Copilot GCC can address the unique challenges of public sector missions. Tailored for IT professionals, administrators, and decision-makers in government agencies, these readiness days offer practical, actionable insights to help you drive secure productivity, efficiency, and innovation in your organization.
Reserve Your Spot Today!
Spaces are limited for these in-person readiness days to ensure personalized, in-depth discussions. Register now to secure your spot and be among the first to explore how Microsoft 365 Copilot GCC can empower your teams to work more effectively.
Register for October 17th
Register for October 16th
Register for October 15th
Why You Should Attend
Microsoft 365 Copilot GCC is set to revolutionize government workspaces, bringing AI-powered productivity into secure, compliant environments. At this event, you will uncover how Copilot can automate routine tasks, enhance collaboration, and help your team focus on higher-priority activities—all while meeting the stringent regulations of government operations.
What You’ll Gain from the Event:
In-Depth, Actionable Learning: Sessions will provide detailed, step-by-step guides on how to implement Copilot GCC within your agency’s existing workflows.
Transparency and Trust: Gain insights from candid discussions about the strengths and limitations of AI in government settings, addressing your concerns around compliance, security, and data privacy.
Real-World Solutions: Learn directly from AI thought leaders and engineers on how to navigate integration challenges and leverage responsible AI practices.
What to Expect:
Prepare to Become an AI-Powered Public Sector Organization: Learn how to ready your workforce for a seamless transition to Microsoft 365 Copilot GCC, with practical tips tailored to government environments.
Responsible AI: Discover how Microsoft partners with government agencies to implement AI responsibly, ensuring compliance, security, and ethical considerations.
Gov AI Integration Challenges and Opportunities: Engage in transparent discussions with product experts and engineers to explore best practices for deploying Copilot GCC in your agency’s environment.
Event Agenda:
09:00 AM – 09:15 AM: Keynote
09:15 AM – 10:00 AM: Roadmap
10:00 AM – 10:50 AM: How Copilot Works
11:00 AM – 11:50 AM: How to Get Ready: Technology
12:00 PM – 01:00 PM: Lunch
01:00 PM – 01:50 PM: Microsoft 365 Business Chat Conversation
02:00 PM – 02:50 PM: How to Get Ready: People
03:00 PM – 03:50 PM: Art of the Possible Demos
Note: The agenda is identical for each day, so select the date that best fits your schedule.
Is Lunch Provided?
Yes! Lunch is provided each day, thanks to our spectacular sponsoring partners:
October 15: Planet Technologies
October 16: Dell Technologies | Federal
October 17: Carahsoft
Be sure to learn more about our sponsoring partners’ Copilot offerings during the Partner Lunch & Learn session!
Venue Details:
Where: The Microsoft Garage, Reston – DC, 11955 Freedom Drive, Reston, VA 20190
Engage and Share!
Have questions or want to share your thoughts on Microsoft 365 Copilot in GCC? Join the conversation in the comments below and connect with your peers!
Microsoft Tech Community – Latest Blogs –Read More
Problems with legend in R2016b
Why can I no longer add a plot with a legend in R2016b the same way it worked in previous versions?
During the migration to R2016b or R2017a, the script shows some problems with the legend function in the plot.
An example is a simple line after a plot:
legend('Curve1','Curve2',1);
While it used to work fine prior to R2016b, we now see the following response:
Error using legend>process_inputs (line 582)
Invalid argument. Type 'help legend' for more information.
Error in legend>make_legend (line 340)
[autoupdate,orient,location,position,children,listen,strings,propargs] = process_inputs(ha,argin); %#ok
Error in legend (line 294)
make_legend(ha,args(arg:end),version);
Error in Script_plot (line 274)
legend('Curve1','Curve2',1);
legend, plot MATLAB Answers — New Questions
How to perform matrix math
Hello all, I currently have code that looks at delta distances in a .csv file and outputs a new .csv file with a summation of these distances for each of three columns. The code reads three columns of a .csv file, subtracts the second row from the first row, and continues this calculation down all the rows. I have had the chance to look at some of this distance data and it looks good, but I need to tweak my calculation, where I have S = sum(abs(diff(M(:,2:4),1,1)),1).
I would like to continue this method of subtracting the second row from the first and moving down, but I need an intermediate step (or two) where the subtracted values are squared. Then that row of values would be added up and the square root of that would be taken. That end value would be added to the results of the same operation performed down the rows.
I have attached an image that explains this more clearly (I hope). It is a way of doing the distance formula; I just need to loop it for 500 x, y, and z coordinates. I will also attach an example file of the data in .csv form. Thank you, any help is much appreciated!
clc
% appropriate dir() call that returns info
% about the files you want to process:
fn = dir('C:\Users\lucasarsenith\Desktop\Data\*.csv'); % this call returns info about .csv files in that directory;
% you may need to modify it to work for your file locations
% (see dir documentation)
% number of files:
N_files = numel(fn);
% pre-allocate results matrix (one row per file, 3 columns):
results = zeros(N_files,3);
% read and process each file:
for ii = 1:N_files
% read the file starting from line 10:
M = readmatrix(fullfile(fn(ii).folder,fn(ii).name));
% process columns 2-4 of the file’s data:
S = sum(abs(diff(M(:,2:4),1,1)),1);
% store the result:
results(ii,:) = S;
end
% write the results file (can be located anywhere):
writematrix(results,'C:\Users\lucasarsenith\Desktop\Plot.csv')
matrix, matrices, mathematics MATLAB Answers — New Questions
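The calculation described in the question (subtract successive rows, square, sum across x/y/z, take the square root per row, then add the results down the rows) is a per-step Euclidean distance accumulated into a total path length. A minimal sketch in Python/NumPy, using illustrative coordinates rather than the poster's file:

```python
import numpy as np

# Illustrative x/y/z coordinates, one row per sample
# (stands in for columns 2-4 of the CSV).
xyz = np.array([[0.0, 0.0, 0.0],
                [1.0, 2.0, 2.0],
                [4.0, 6.0, 2.0]])

deltas = np.diff(xyz, axis=0)                      # row-to-row differences, (n-1) x 3
step_lengths = np.sqrt((deltas ** 2).sum(axis=1))  # Euclidean length of each step
total = step_lengths.sum()                         # total path length: 3.0 + 5.0 = 8.0
print(total)
```

In MATLAB, the equivalent change to the script above would be replacing the S = sum(abs(diff(M(:,2:4),1,1)),1) line with S = sum(sqrt(sum(diff(M(:,2:4),1,1).^2,2)),1), which squares the row differences, sums across the three coordinate columns, takes the square root per row, and sums down the rows (yielding one scalar per file instead of three column sums).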