Category: News
Matlab App: Get cursor position in axis continuously, keep plot interactivity
Hi there,
I’m trying to create a matlab app in which users can create geometric objects and use them for some calculations later.
These are displayed in a 3D axis, rotated slightly. I am trying to create user functions to draw lines inside that axis. For this, I want to implement continuously displayed x,y coordinates, or crosshairs or similar, as well as data snapping. Users will only draw 2D objects on the x,y plane. I still want users to be able to rotate the plot, zoom, and pan normally with the mouse.
If I use a WindowButtonMotionFcn, then I lose plot interactivity with the mouse. If I use other means of getting the cursor position, like java.awt.MouseInfo.getPointerInfo().getLocation() and getpixelposition() or similar gui coordinate functions, I have to deal with a rotated camera and axis transformation. This makes "hit detection" on the data very cumbersome.
Using axes.CurrentPoint seems useless, because it only updates when the mouse is clicked, not continuously.
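For reference, a minimal hedged sketch of the plane-projection part of this problem (axes and callback names are assumptions, and, as the question notes, setting WindowButtonMotionFcn may still disable the built-in interactions): within a WindowButtonMotionFcn callback, ax.CurrentPoint does update continuously, and intersecting its view ray with the z = 0 plane yields the x,y drawing point.

```matlab
% Sketch only: project the cursor's view ray onto the z = 0 plane.
fig = uifigure;
ax = uiaxes(fig);
view(ax, 3)
fig.WindowButtonMotionFcn = @(src, evt) showXY(ax);

function showXY(ax)
    cp = ax.CurrentPoint;          % 2x3 matrix: front/back points of the view ray
    dir = cp(2,:) - cp(1,:);       % ray direction through the cursor
    t = -cp(1,3) / dir(3);         % ray parameter where z = 0
    xy = cp(1,1:2) + t*dir(1:2);   % intersection with the x,y plane
    ax.Title.String = sprintf('x = %.3f, y = %.3f', xy(1), xy(2));
end
```

This avoids the screen-coordinate transforms needed by the java.awt.MouseInfo approach, because CurrentPoint is already expressed in data coordinates.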
Any ideas on how to do this?
appdesigner, app designer, windowbuttonmotionfcn, axes, matlab gui MATLAB Answers — New Questions
Got error in resample function
I am a beginner in MATLAB and I was trying to resample y with the resample function,
but I got an error: "Incorrect number or types of inputs or outputs for function resample."
Please tell me what the problem is if you know. Thanks a lot.
load handel.mat
y = y(:);
Fs = 8192;
fc = 2e5;
Fs_new = ceil( (Fs/2 + fc) / Fs * 2 ) * Fs;
y_resampled = resample(y,Fs_new,Fs);
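A hedged diagnostic sketch, not a definitive fix: this error message often appears when resample is shadowed by a variable or by another function on the path, or when the Signal Processing Toolbox version of resample is not the one being called. These checks can narrow it down:

```matlab
% List every resample visible on the path to spot shadowing.
which resample -all
% Confirm the Signal Processing Toolbox is installed.
ver signal
% Confirm the inputs are the numeric types resample expects.
class(y), class(Fs_new), class(Fs)
```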
resample, matlab, error MATLAB Answers — New Questions
How to generate custom unoptimized code within a system composer software architecture model?
C-function code generation appears to work differently when generating code inside of a System Composer software architecture model instead of a standalone Simulink model. I am trying to insert certain processor primitives (mutexes) into my Simulink model such that they are incorporated into the autogenerated code. It is a few simple lines of C++ code, so I opted to use the C Function block. In both cases, I have specified "Generate code as-is (optimizations off)" in the C Function block dialog.
Inside of an export-function simulink model, I created a c-function block that includes a comment and then a simple line of code, which generates correctly:
However, if I copy this exact same C Function block into an export-function model within a System Composer software architecture model, it no longer generates my specified code. Instead, it generates function calls but does not define the functions anywhere. (Even a simple text search of my entire workspace shows that only the function call shown below gets generated.)
software architecture models, c-functions, export-function models, embedded coder, system composer MATLAB Answers — New Questions
When using writetable to write a table containing datetime data to Excel, the datetime values saved in Excel are not in the same format as the original data
The table data is as follows:
Use the following writetable function code to save the data to Excel; the exported result is shown in the following figure:
The time display is incomplete.
How to set the exported time format to be the same as the original data?
writetable(T, filePath);
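A hedged sketch of one possible fix, assuming the datetime variable in T is named Time: writetable exports datetime values as they are displayed, so setting an explicit Format on the variable before writing usually carries the full timestamp into the sheet.

```matlab
% The variable name "Time" is an assumption for illustration.
T.Time.Format = 'yyyy-MM-dd HH:mm:ss';
writetable(T, filePath);
```

Note that Excel applies its own cell formatting on top of the exported text, so the column may still need to be widened or reformatted in Excel to show the complete value.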
writetable, datetime, excel MATLAB Answers — New Questions
Plot along a line from PDE solution
The code below solves a 1-D, transient heat transfer problem set up as in general PDE format. The solution is plotted in color across the domain from 0 to 0.1 after 10 seconds have elapsed. What is the best way to plot the temperature across the length of this domain at this final time?
Thanks
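One hedged way to do this with the thermalresults object the code below creates: interpolateSolution (PDE Toolbox) can sample the final time step along a horizontal line through the domain. The line height y = 0.5 is an assumption; for this 1-D problem any height should give the same profile.

```matlab
% Sample the solution at the last time step (t = 10 s) along y = 0.5,
% then plot temperature versus x across the domain length.
xq = linspace(0, 0.1, 200);
yq = 0.5*ones(size(xq));
Tq = interpolateSolution(thermalresults, xq, yq, numel(tlist));
figure
plot(xq, Tq, 'LineWidth', 1.5)
xlabel('x [m]')
ylabel('Temperature')
```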
clear all;
%% Create transient thermal model
thermalmodel = createpde(1);
R1= [3,4,0,0.1,0.1,0,0,0,1,1]';
gd= [R1];
sf= 'R1';
ns = char('R1');
ns = ns';
dl = decsg(gd,sf,ns);
%% Create & plot geometry
geometryFromEdges(thermalmodel,dl);
pdegplot(thermalmodel,"EdgeLabels","on","FaceLabels","on")
xlim([0 0.1])
ylim([-1 1])
% axis equal
%% Generate and plot mesh
generateMesh(thermalmodel)
figure
pdemesh(thermalmodel)
title("Mesh with Quadratic Triangular Elements")
%% Apply BCs
% Edge 4 is left edge; Edge 2 is right side
applyBoundaryCondition(thermalmodel, "dirichlet",Edge=[4],u=100);
applyBoundaryCondition(thermalmodel, "dirichlet",Edge=[2],u=20);
%% Apply thermal properties [copper]
rho = 8933;     % density [kg/m^3]
cp = 385;       % specific heat [J/(kg*K)]
rhocp = rho*cp; % volumetric heat capacity [J/(m^3*K)]
k = 401;        % thermal conductivity [W/(m*K)]
%% Define uniform volumetric heat generation rate
Qgen = 0;       % W/m^3
%% Define coefficients for generic governing equation to be solved
m = 0;
a = 0;
d = rhocp;
c = k;
f = Qgen;
specifyCoefficients(thermalmodel, m=0, d=rhocp, c=k, a=0, f=f);
%% Apply initial condition
setInitialConditions(thermalmodel, 20);
%% Define time limits
tlist= 0: 1: 10;
thermalresults= solvepde(thermalmodel, tlist);
% Plot results
sol = thermalresults.NodalSolution;
subplot(2,2,1)
pdeplot(thermalmodel,"XYData",sol(:,11), ...
    "Contour","on", ...
    "ColorMap","jet")
plotting MATLAB Answers — New Questions
Use Auto-Label Policies to Protect Old Files from Copilot
Combining Auto-Label Policies, Trainable Classifiers, Sensitivity Labels, and DLP to Stop Copilot Accessing Old But Still Confidential Files
I’ve been on the TEC 2025 Roadshow in Europe this week. Monday was London, Tuesday Paris, and Dusseldorf is the final stop on Thursday. These trips sound like they should be great fun, but running events in three major cities over four days takes a brutal amount of effort.
In any case, my topic this week is protecting Microsoft 365 data in the era of AI. During the talk, I recommend that people use features like Restricted Content Discovery, sensitivity labels, and the (preview) DLP policy for Copilot to exert control over confidential and sensitive documents and restrict access to Copilot for Microsoft 365 and Copilot agents.
Find and Protect Old Confidential Material
All of which led to a great question at the London event: “how do I apply sensitivity labels to thousands of old but still confidential material files stored in multiple SharePoint Online sites.” It’s a good example of the kind of practical issue faced by tenant administrators during deployments.
The obvious answer is to use an auto-label policy to apply sensitivity labels that are then blocked by the DLP policy for Copilot. An auto-label policy can find Office documents at rest that don’t have sensitivity labels and apply a chosen label (manually applied sensitivity labels are never overwritten, but a policy can overwrite a lower-priority sensitivity label).
Trainable Classifiers
The issue is to identify the target set of confidential files. This is where a trainable classifier can help. Purview Data Lifecycle Management includes 75-odd built-in trainable classifiers that Microsoft has taught to find different types of documents like business plans and credit reports.
It might be possible to identify old confidential material using a built-in trainable classifier. If not, tenants can create custom trainable classifiers by using machine learning to process a training set of documents unique to the business. The process isn’t difficult, and the hardest part is often to find a suitable set of sample documents to train the classifier with. Running a simulation will quickly tell if machine learning can extract an accurate digital structure from sample documents to use as a classifier.
I have a couple of trainable classifiers in use to auto-label files. To test the process, I selected the default Source Code classifier (Figure 1). Behind the scenes, Purview looks for some matching documents to demonstrate how each of the built-in classifiers work. In this case, Purview had found several items in a projects site where I store files like drafts for blog posts. Some of the matching items had sensitivity labels, others did not. It was a good set to test the theory against.

Creating an Auto-Label Policy
The next step is to create an auto-label policy. Because we want to apply sensitivity labels, the policy is created in the Purview Information Protection solution. The policy settings are very straightforward. Look for files matching the source code trainable classifier in all SharePoint Online sites and apply the Confidential sensitivity label. Figure 2 shows the rule created to find files that match the trainable classifier.

You can choose to run an auto-label policy in simulation mode before making it active. Even though the trainable classifier shows some sample files that it found, it’s still a good idea to run the simulation, just to be sure. When you’re happy with the results, you can activate the policy to have Purview assign the chosen sensitivity label to the files found by the policy. Once the files are labelled, they’ll be invisible to Copilot for Microsoft 365.
Background Processing Runs Until the Job’s Done
Depending on how many old files need to be protected, the entire process to create a trainable classifier, tweak the classifier until it’s accurate, and run auto-labeling might take several weeks to complete. Most of the work happens in the background at a pace dictated by demands on the service. The auto-label policy will continue to run, even once all those old but still valuable files are labelled, unless you stop it.
Learn how to exploit the data available to Microsoft 365 tenant administrators through the Office 365 for IT Pros eBook. We love figuring out how things work.
Aqua Security Achieves FedRAMP® High Authorization
Aqua Security’s Cloud Native Application Protection Platform (CNAPP) has achieved FedRAMP® High Impact Authorization, making Aqua one of the few CNAPP providers authorized at the highest level of federal cloud security compliance. This milestone opens the door for U.S. federal agencies, commercial organizations that require FedRAMP High, and cloud service providers operating in FedRAMP-authorized environments to confidently use Aqua’s platform for securing their cloud native applications.
How to use cell value as a variable or How to convert cell to double?
I have data in Excel that I am reading into MATLAB. I want to use text from the Excel file as variable names, to assign the fixed sets of data values that are also in the same file, but I am facing a problem because the text read from Excel has class cell.
So, how can I use cell values (Time, Temperature, etc.) as variables in MATLAB?
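Rather than creating workspace variables from text (the eval anti-pattern hinted at by the question’s tags), a hedged sketch using a struct with dynamic field names; the cell array contents here are assumptions for illustration:

```matlab
% Names read from the Excel header row and their associated columns.
names  = {'Time', 'Temperature'};
values = {[0 1 2], [20.1 20.5 21.0]};
data = struct();
for k = 1:numel(names)
    data.(names{k}) = values{k};   % accessible as data.Time, data.Temperature
end
```

readtable usually sidesteps the problem entirely, since it turns header text into table variable names automatically.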
cell, variable, evil, eval, antipattern, anti-pattern MATLAB Answers — New Questions
Code generation and stateflow transition variant
Hello,
Is the generated code tunable with a variant transition set with "Variant Activation Time = Code compile" in R2024b?
Indeed, I get a compilation warning with my generated code. Code generation creates a static function with an #if inside the function:
As a result, compilation produces a warning saying that a function is defined but not used.
If you have a solution, I am interested.
Regards,
Jean
variant, code generation MATLAB Answers — New Questions
Discrepancy in Peak Temperature Between Simulation and Experiment in 4s3p NCA Battery Pack
Hi everyone,
I’m currently working on the simulation of a 4s3p battery pack using Molicel INR-21700-P45B cells (NCA chemistry). The pack undergoes full charge and discharge cycles (from 3.0 V to 4.2 V per cell) under natural convection conditions, for 10 cycles at both 1C and 1.5C rates.
The charge protocol includes CCCV charging, followed by a 30-minute rest, then CC discharging, and another 30-minute rest. In the simulation, the peak temperature during discharging is higher than during charging. However, in our experimental results, the opposite is observed—the peak temperature is higher during charging.
For the simulation, we used the OCV curve of a single cell multiplied by 4 for the 4s configuration.
Has anyone encountered a similar issue or could provide insights into why the simulated thermal behavior might differ from experimental results?
simscape, simulation MATLAB Answers — New Questions
errors when generating certain motions of Revolute Joint in simMechanics
The model:
<</matlabcentral/answers/uploaded_files/27438/3.PNG>>
The simMechanics system is as follow:
<</matlabcentral/answers/uploaded_files/27436/1.PNG>>
simin is input from workspace and is the motion of Revolute Joint as the time goes on.
<</matlabcentral/answers/uploaded_files/27437/2.PNG>>
But it shows some kind of error:
In the dynamically coupled component containing Revolute Joint Revolute_Joint, there are fewer joint primitive degrees of freedom with automatically computed force or torque (0) than with motion from inputs (1). The prescribed motion trajectories in this component may not be achievable. Solve this problem by reducing the number of joint primitives with motion from inputs or increasing the number of joint primitives with automatically computed force or torque. Resolve this issue in order to simulate the model.
Does anybody know what’s wrong and what I should do to solve the problem?
simulink, simmechanics MATLAB Answers — New Questions
how can I add a point in figure?
Hi,
I want to add a point to this figure. Can you please tell me how to do that?
the point that I want to add to my figure is x = 70.00, y = 0.30. Thanks for your time.
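A hedged sketch, assuming the figure’s axes are current: hold the axes and overlay a marker at the point.

```matlab
hold on
plot(70, 0.30, 'ro', 'MarkerFaceColor', 'r')   % filled red marker at (70, 0.30)
hold off
```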
matlab, figure MATLAB Answers — New Questions
How to Report Who Shared What File From SharePoint Online Sites
Filter, Refine, and Report File Sharing Events from the Audit Log
A recent article about auditing file sharing activities in Teams generated some questions. The accompanying script searches for FileUploaded events, which have nothing to do with sharing. SharePoint Online captures FileUploaded events when users create new files in SharePoint sites.
In any case, the article makes a case for keeping an eye on files uploaded to Teams channels because it’s possible that someone might share information that results in a data leak. It’s a tenuous proposal that only makes sense in a weird sort of way. I am not saying that no one has ever uploaded a file to a Teams channel that they shouldn’t have. Some mistakes will happen given that people create billions of files in SharePoint Online daily. But the sheer volume of FileUploaded events created in the unified audit log means that a simple report detailing these events is never going to be valuable. Filtering and analysis are required to extract value.
Most file activity logged by SharePoint Online is innocuous. To find value in the audit log, administrators need to know the data they want to find. As an example, it seems like it would be good to know who shares files from SharePoint Online, both through Teams and the SharePoint browser interface, and who they share the files with (internal and external).
Microsoft documents how to use audit data to track sharing activities. There’s lots of good information in that article to help you understand how SharePoint Online generates the content of the audit events generated to track sharing activities.
Finding File Sharing Events in the Audit Log
When I begin to figure out what audit data might be valuable for investigative purposes, I usually use several accounts to perform the activities I’m interested in (in this case, sharing documents), wait about 30 minutes, and then go through the events that turn up in the audit log. Searching the audit log with a command like this returns SharePoint sharing events. Make sure that the start and end dates are limited to the period when the actions of interest occur:
[array]$Records = Search-UnifiedAuditLog -StartDate '2-Apr-2025 19:00' -EndDate (Get-Date) -Formatted -SessionCommand ReturnLargeSet -ResultSize 5000 -RecordType SharepointSharingOperation
Analyzing the audit data revealed that SharingSet events happen to set up a sharing link. UserExpirationChanged events are also found if the sharing link policy sets expiration dates for sharing links. If you cast the audit net wider and look for other events, you’ll also find Send events logged when SharePoint Online sends notification messages to inform people that someone has shared a file with them.
Filtering File Sharing Events
The audit log is a rich source of information that can be overwhelming because of the amount of logged data. When searching for answers, it’s important to focus. In this instance, I extracted only the SharingSet events and then filtered the returned set to remove sharing events that I wasn’t interested in. These events included:
- Sharing for SharePoint embedded applications such as Loop and Outlook Newsletters.
- Sharing performed by the background app@sharepoint app. For instance, when SharePoint Online shares the recording of a Teams meeting (stored in the OneDrive of the meeting organizer) with meeting participants.
- Sharing set operations to adjust SharePoint lists. When a user shares a document, SharePoint Online adjusts the group that controls access to that item within the site, which results in audit events being logged for groups like “Limited access system group for list.” A Microsoft article covers permission levels and explains what these groups mean.
Essentially, the only sharing events I am interested in are those involving member and guest Entra ID accounts (i.e., humans).
The lesson here is that retrieving a set of events from the audit log seldom delivers useful results. It’s usually the first stage in a process to remove unwanted events to focus on the valuable information.
Parsing and Reporting Sharing Audit Events
The next step is to parse the information contained in the remaining audit events to answer the questions who shared what with whom and what level of access did they grant? Most of this information is hidden in plain sight in the AuditData property of audit events. The data must be extracted, cleaned up, and enhanced.
For example, if your organization uses sensitivity labels to protect files (and you should), the audit events record the GUID of the label applied to the shared file and the GUID of the label applied to the host site (the container management label). Resolving the GUIDs to label names makes this information far more accessible. Knowing that a shared file has a sensitivity label that will block unauthorized access is always a nice feeling.
The result is a report of file sharing events (Figure 1) that answers the question of who shared files from SharePoint Online with whom and what access was granted.

In addition, because the script extracts the email addresses of sharees, you can analyze the volume of sharing to external domains:
$AuditReport | Group-Object TargetDomain -NoElement | Sort-Object Count -Descending | Format-Table Name, Count

Name                        Count
----                        -----
microsoft.com                  11
o365maestro.onmicrosoft.com     4
contoso.com                     2
proton.me                       1
Report File Sharing Events to Meet Your Requirements
Like anything published on the internet, the script (available from GitHub) might or might not satisfy your requirements. But it’s PowerShell, so you can change the code to meet your needs. I used the Graph AuditLog Query API to retrieve audit data. The same data is available by running the Search-UnifiedAuditLog cmdlet.
The takeaway is that real value is seldom extracted from audit logs without some additional processing to refine, filter, and interpret the information. Articles that merely extract and report audit data don’t add much value because they don’t tell the full story or surface the actionable data that administrators need.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365.
Nonlinear Curve fitting with integrals
I encountered a nonlinear fitting problem, and the fitting formula is shown in Equation (1), which includes two infinite integrals (in practice, the integration range can be set from 0.01E-6 to 200E-6).
In these formulas, except for x and y being vectors, all other variables are scalars, and Rmedian and sigma are the parameters to be fitted.
I found a related post and tried to write the code based on it, but it keeps reporting errors. The error message seems to indicate that the vector dimensions are inconsistent, preventing the operation from proceeding. However, these functions are all calculations for individual scalars.
Error using /
Matrix dimensions must agree.
Error in dsdmain>@(r)1/(2*r*sigma*sqrt(2*pi))*exp(-(log(2*r)-log(2*Rmean)^2)/(2*sigma^2)) (line 13)
gauss = @(r) 1/(2*r*sigma*sqrt(2*pi))* exp( -(log(2*r)-log(2*Rmean)^2)/(2*sigma^2) );
My question is: Can I refer to the content of this post to solve my problem? If yes, what does this error message mean? If not, how should I resolve my problem? (Note: The range of Rmedian is 1E-6 to 5E-6)
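For context on the error message: `integral` evaluates its integrand with a whole vector of `r` values at once, so the matrix operators `/` and `^` in the anonymous function fail with "Matrix dimensions must agree." A minimal elementwise rewrite (reusing the `sigma` and `Rmean` variables from the post, and also moving the misplaced parenthesis so the squared term is the difference of logs) might look like:

```matlab
% Elementwise version of the lognormal factor: ./ and .^ operate
% per element, so the handle works when integral() passes a vector r.
gauss = @(r) 1 ./ (2*r*sigma*sqrt(2*pi)) .* ...
        exp(-(log(2*r) - log(2*Rmean)).^2 ./ (2*sigma^2));
```

Note that the original expression squared `log(2*Rmean)` alone; the lognormal density squares the whole difference `log(2*r) - log(2*Rmean)`.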
After modifying the code according to Walter Roberson’s and Torsten’s suggestions, the program no longer throws errors. But no matter how I set the initial values, it always prints:
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the default value of the optimality tolerance.
<stopping criteria details>
theta =
1.0e-05 *
0.2000
0.1000
Optimization completed: The final point is the initial point.
The first-order optimality measure, 0.000000e+00, is less than
options.OptimalityTolerance = 1.000000e-06.
Optimization Metric                          Options
relative first-order optimality = 0.00e+00   OptimalityTolerance = 1e-06 (default)
I have checked all the formulas and the units of the variables, and I didn’t find any problems.
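One plausible explanation for a zero gradient at the initial point: the parameters are of order 1e-6, so the default relative finite-difference step used by lsqcurvefit can be too small to change the model output, making the solver see a flat objective. A common workaround is to fit in rescaled units so the parameters are of order 1. A sketch, assuming `model`, `xdata`, and `ydata` from the code below (the scaling wrapper and variable names are illustrative):

```matlab
% Sketch: fit Rmean and Rstd in micrometres so finite differencing
% perturbs the parameters by a meaningful amount, then scale back.
scale = 1e-6;                            % micrometres -> metres
modelScaled = @(p, x) model(p*scale, x); % p is now in micrometres
p0 = [2; 1];                             % was [2e-6; 1e-6]
lb = [0.01; 0.1];
ub = [10; 2];
theta = lsqcurvefit(modelScaled, p0, xdata, ydata, lb, ub);
theta = theta * scale;                   % back to metres
```

Alternatively, setting a larger 'FiniteDifferenceStepSize' via optimoptions can have a similar effect without rescaling.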
-------------------- Below is my code for issue reproduction --------------------
function testmain()
clc
function Kvec = model(param,xdata)
% Vector of Kd for every xdata:
Kvec = zeros(size(xdata));
Rmean = param(1);
Rstd = param(2);
for i = 1:length(xdata)
model = @(r) unified(xdata(i),r,Rmean,Rstd,delta,Delta,D,lambda);
Kvec(i) = integral(model,0.01E-6,200E-6);
end
end
function s = unified(g,R,Rmean,Rstd,delta,Delta,D,lambda)
%unified Unified fitting model for DSD
% exponentional combined
factor = 1./(2*R*Rstd*sqrt(2*pi)) *2 ; % int(P(r)) = 0.5,1/0.5=2
p1 = -(log(2*R)-log(2*Rmean)).^2/(2*Rstd^2);
c = -2*2.675E8^2*g.^2/D;
tmp = 0;
for il = 1:length(lambda)
a2 = (lambda(il)./R).^2;
an4 = (R/lambda(il)).^4;
Psi = 2+exp(-a2*D*(Delta-delta))-2*exp(-a2*D*delta)-2*exp(-a2*D*Delta)+exp(-a2*D*(Delta+delta));
tmp = tmp+an4./(a2.*R.*R-2).*(2*delta-Psi./(a2*D));
end
p2 = c*tmp;
s = factor.*exp(p1+p2);
end
Delta = 0.075;
delta = 0.002;
D = 0.098E-9;
lambda = [2.0816 5.9404 9.2058 12.4044 15.5792 18.7426 21.8997 25.0528 28.2034 31.3521];
g = [ 0.300616, 0.53884, 0.771392, 1.009616, 1.24784, 1.480392, 1.718616, 1.95684, 2.189392, 2.427616, 2.66584, 2.898392 ];
xdata = g;
ydata = [100, 91.16805426, 80.52955192, 67.97705378, 55.1009735,41.87307917, 30.39638776, 21.13515607, 13.7125649, 8.33083767, 5.146756077, 2.79768609];
ydata = ydata/ydata(1); % normalize
% Initial guess for parameters:
Rmean0 = 2E-6;
Rstd0 = 1E-6;
p0 = [Rmean0;Rstd0];
% lsqcurvefit is in the optimization toolbox.
% fit, from the curve fitting toolbox may be an alternative
theta = lsqcurvefit(@model,p0,xdata,ydata,[0.01E-6;0.1E-6],[10E-6,2E-6])
end

curve fitting, integral MATLAB Answers — New Questions
Is comm.FMDemodulator a product detection method?
I looked at the algorithm of comm.FMDemodulator, and I have a question: comm.FMDemodulator does not require the input of the carrier frequency (fc). So how does it perform the step y_s(t) = Y(t)·e^(−j2π·fc·t)? Or is it using another method, such as phase demodulation?

fmdemod MATLAB Answers — New Questions
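For reference, comm.FMDemodulator takes a complex baseband input, so the downconversion by e^(−j2π·fc·t) is assumed to have happened before the object is called; demodulation then reduces to differentiating the phase of the baseband signal. A minimal discriminator sketch (hypothetical variables: y = complex baseband samples, Fs = sample rate in Hz, fd = frequency deviation in Hz):

```matlab
% Baseband FM discriminator: the message is proportional to the
% derivative of the instantaneous phase; no carrier frequency appears.
phase = unwrap(angle(y));              % instantaneous phase
msg   = diff(phase) * Fs / (2*pi*fd);  % recovered message, scaled by fd
```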
Clarification on Simulink Sample Rate vs HDL Coder Target Frequency
Hello,
I have a question regarding the relationship between Simulink sample rate and HDL Coder target frequency.
Is it okay if the Simulink sample rate is equal to the target frequency specified in HDL Coder?
I’ve often heard that the hardware clock rate (target frequency) should generally be at least 2× the sample rate to ensure proper timing and pipelining. But in Simulink and HDL Coder, I’m unsure how strict this rule is.
Specifically:
If I set my Simulink sample rate to 120 MSPS and also specify 120 MHz as the HDL Coder target frequency, is this configuration valid?
Or should the target frequency always be greater than the sample rate, even in fully pipelined or parallel architectures?
I’d really appreciate any clarification on how these two values interact and whether matching them could lead to timing issues during synthesis or implementation.
Thanks in advance!

simulink, sample rate, hdl coder, target frequency MATLAB Answers — New Questions
How to shift lines to their correction positions (I need to correct figure)
I need to correct figure (3) in the following script so that it matches the attached file (in an automatic way).
clc; close all; clear;
% Load the Excel data
filename = 'ThreeFaultModel_Modified.xlsx';
data = xlsread(filename, 'Sheet1');
% Extract relevant columns
x = data(:, 1); % Distance (x in km)
Field = data(:, 2); % Earth field (Field in unit)
%==============================================================%
% Input number of layers and densities
num_blocks = input('Enter the number of blocks: ');
block_densities = zeros(1, num_blocks);
for i = 1:num_blocks
block_densities(i) = input(['Enter the density of block ', num2str(i), ' (kg/m^3): ']);
end
%==============================================================%
% Constants
G = 0.00676;
Lower_density = 2.67; % in kg/m^3
%==============================================================%
% Calculate inverted depth profile for each layer
z_inv = zeros(length(x), num_blocks);
for i = 1:num_blocks
density_contrast = block_densities(i) - Lower_density;
if density_contrast ~= 0
z_inv(:, i) = Field ./ (2 * pi * G * density_contrast);
else
z_inv(:, i) = NaN; % Avoid division by zero
end
end
%==============================================================%
% Compute vertical gradient (VG) of inverted depth (clean)
VG = diff(z_inv(:, 1)) ./ diff(x);
%==============================================================%
% Set fault threshold and find f indices based on d changes
f_threshold = 0.5; % Threshold for identifying significant d changes
f_indices = find(abs(diff(z_inv(:, 1))) > f_threshold);
%==============================================================%
% Initialize f locations and dip arrays
%==============================================================%
f_locations = x(f_indices); % Automatically determined f locations
f_dip_angles = nan(size(f_indices)); % Placeholder for calculated dip
% Calculate dip for each identified f
for i = 1:length(f_indices)
idx = f_indices(i);
if idx < length(x)
f_dip_angles(i) = atand(abs(z_inv(idx + 1, 1) - z_inv(idx, 1)) / (x(idx + 1) - x(idx)));
else
f_dip_angles(i) = atand(abs(z_inv(idx, 1) - z_inv(idx - 1, 1)) / (x(idx) - x(idx - 1)));
end
end
%==============================================================%
% Displacement of faults
%==============================================================%
D_faults = zeros(size(f_dip_angles));
for i = 1:length(f_indices)
idx = f_indices(i);
dip_angle_rad = deg2rad(f_dip_angles(i)); % Convert dip to radians
D_faults(i) = abs(z_inv(idx + 1, 1) - z_inv(idx, 1)) / sin(dip_angle_rad);
end
% Assign displacement values
D1 = D_faults(1); % NF displacement
D2 = D_faults(2); % VF displacement
D3 = D_faults(3); % RF displacement
%==============================================================%
% Processing Data for Interpretation
%==============================================================%
A = [x Field z_inv]; % New Data Obtained
col_names = {'x', 'Field'};
for i = 1:num_blocks
col_names{end+1} = ['z', num2str(i)];
end
dataM = array2table(A, 'VariableNames', col_names);
t1 = dataM;
[nr, nc] = size(t1);
t1_bottoms = t1;
for jj = 3:nc
for ii = 1:nr-1
if t1_bottoms{ii, jj} ~= t1_bottoms{ii+1, jj}
t1_bottoms{ii, jj} = NaN;
end
end
end
%==============================================================%
% Identifying NaN rows
%==============================================================%
nans = isnan(t1_bottoms{:, 3:end});
nan_rows = find(any(nans, 2));
xc = t1_bottoms{nan_rows, 1}; % Corrected x-coordinates
yc = zeros(numel(nan_rows), 1); % y-coordinates for NaN rows
for ii = 1:numel(nan_rows)
idx = find(~nans(nan_rows(ii), :), 1, 'last');
if isempty(idx)
yc(ii) = 0;
else
yc(ii) = t1_bottoms{nan_rows(ii), idx+2};
end
end
%==============================================================%
% Plot f Interpretation
%==============================================================%
figure(1)
plot(A(:, 1), A(:, 3:end))
hold on
grid on
set(gca, 'YDir', 'reverse')
xlabel('Distance (km)');
ylabel('D (km)');
title('Interpretation of profile data model')
%==============================================================%
figure(2)
hold on
plot(t1_bottoms{:, 1}, t1_bottoms{:, 3:end}, 'LineWidth', 1)
set(gca, 'YDir', 'reverse')
box on
grid on
xlabel('Distance (km)');
ylabel('Ds (km)');
title('New interpretation of profile data')
%==============================================================%
% Plot the interpreted d profiles
figure(3)
hold on
plot(t1_bottoms{:, 1}, t1_bottoms{:, 3:end}, 'LineWidth', 1)
yl = get(gca, 'YLim'); % Get Y-axis limits
% Define f locations and corresponding dip
f_locations = [7.00, 14.00, 23.00];
d_angles = [58.47, 90.00, -69.79];
for ii = 1:numel(f_locations)
% Find the nearest x index for each fault location
[~, idx] = min(abs(t1_bottoms{:, 1} - f_locations(ii)));
% Get the starting x and y coordinates (fault starts at the surface)
x_f = t1_bottoms{idx, 1};
y_f = yl(1); % Start at the surface (0 km depth)
% Check if the dip angle is 90 degrees (vf)
if d_angles(ii) == 90
x_r = [x_f x_f]; % Vertical line
y_r = [yl(1), yl(2)]; % From surface to depth limit
else
% Convert dip to slope (m = tan(angle))
m = tand(d_angles(ii));
% Define the x range for the fault line
% x_r = linspace(x_f - 5, x_f + 5, 100); % Extend 5 km on each side
x_r = x;
y_r = y_f - m * (x_r - x_f); % Line equation (+ m)
% Clip y_r within the plot limits
y_r(y_r > yl(2)) = yl(2);
y_r(y_r < yl(1)) = yl(1);
end
% Plot the fault lines in black (matching the image)
plot(x_r, y_r, 'k', 'LineWidth', 3)
% Display dip angles as text near the faults
% text(x_f, y_f + 1, sprintf('\\theta = %.2f°', d_angles(ii)), ...
%     'Color', 'k', 'FontSize', 10, 'FontWeight', 'bold', 'HorizontalAlignment', 'right')
end
set(gca, 'YDir', 'reverse') % Reverse Y-axis for depth representation
box on
grid on
xlabel('Distance (km)');
ylabel('D (km)');
title('New Interpretation of Profile Data')
%==============================================================%
% Plotting results
figure;
subplot(3,1,1);
plot(x, Field, 'r', 'LineWidth', 2);
title('Field Profile');
xlabel('Distance (km)');
ylabel('Field (unit)');
grid on;
subplot(3,1,2);
plot(x(1:end-1), VG, 'b', 'LineWidth', 1.5);
xlabel('Distance (km)');
ylabel('VG (munit/km)');
title('VG Gradient');
grid on;
subplot(3,1,3);
hold on;
for i = 1:num_blocks
plot(x, z_inv(:, i), 'LineWidth', 2);
end
title('Inverted D Profile for Each Block');
xlabel('Distance (km)');
ylabel('D (km)');
set(gca, 'YDir', 'reverse');
grid on;
%==============================================================%
% Display results
fprintf('F Analysis Results:\n');
fprintf('Location (km) | Dip (degrees) | D_faults (km)\n');
for i = 1:length(f_locations)
fprintf('%10.2f | %17.2f | %10.2f\n', f_locations(i), f_dip_angles(i), D_faults(i));
end
% Ensure both variables are column vectors of the same size
f_locations = f_locations(:); % Convert to column vector
f_dip_angles = f_dip_angles(:); % Convert to column vector
% Save results to Excel (only if fs are detected)
if ~isempty(f_locations)
xlswrite('f_analysis_results.xlsx', [f_locations, f_dip_angles]);
else
warning('No significant fs detected.');
endI need to correct figure (3) in the following script to be similar in the attached file (with automatic way)
clc; close all; clear;
% Load the Excel data
filename = ‘ThreeFaultModel_Modified.xlsx’;
data = xlsread(filename, ‘Sheet1’);
% Extract relevant columns
x = data(:, 1); % Distance (x in km)
Field = data(:, 2); % Earth filed (Field in unit)
%==============================================================%
% Input number of layers and densities
num_blocks = input(‘Enter the number of blocks: ‘);
block_densities = zeros(1, num_blocks);
for i = 1:num_blocks
block_densities(i) = input([‘Enter the density of block ‘, num2str(i), ‘ (kg/m^3): ‘]);
end
%==============================================================%
% Constants
G = 0.00676;
Lower_density = 2.67; % in kg/m^3
%==============================================================%
% Calculate inverted depth profile for each layer
z_inv = zeros(length(x), num_blocks);
for i = 1:num_blocks
density_contrast = block_densities(i) – Lower_density;
if density_contrast ~= 0
z_inv(:, i) = Field ./ (2 * pi * G * density_contrast);
else
z_inv(:, i) = NaN; % Avoid division by zero
end
end
%==============================================================%
% Compute vertical gradient (VG) of inverted depth (clean)
VG = diff(z_inv(:, 1)) ./ diff(x);
%==============================================================%
% Set fault threshold and find f indices based on d changes
f_threshold = 0.5; % Threshold for identifying significant d changes
f_indices = find(abs(diff(z_inv(:, 1))) > f_threshold);
%==============================================================%
% Initialize f locations and dip arrays
%==============================================================%
f_locations = x(f_indices); % Automatically determined f locations
f_dip_angles = nan(size(f_indices)); % Placeholder for calculated dip
% Calculate dip for each identified f
for i = 1:length(f_indices)
idx = f_indices(i);
if idx < length(x)
f_dip_angles(i) = atand(abs(z_inv(idx + 1, 1) – z_inv(idx, 1)) / (x(idx + 1) – x(idx)));
else
f_dip_angles(i) = atand(abs(z_inv(idx, 1) – z_inv(idx – 1, 1)) / (x(idx) – x(idx – 1)));
end
end
%==============================================================%
% Displacement of faults
%==============================================================%
D_faults = zeros(size(f_dip_angles));
for i = 1:length(f_indices)
idx = f_indices(i);
dip_angle_rad = deg2rad(f_dip_angles(i)); % Convert dip to radians
D_faults(i) = abs(z_inv(idx + 1, 1) – z_inv(idx, 1)) / sin(dip_angle_rad);
end
% Assign displacement values
D1 = D_faults(1); % NF displacemen
D2 = D_faults(2); % VF displacement
D3 = D_faults(3); % RF displacement
%==============================================================%
% Processing Data for Interpretation
%==============================================================%
A = [x Field z_inv]; % New Data Obtained
col_names = {‘x’, ‘Field’};
for i = 1:num_blocks
col_names{end+1} = [‘z’, num2str(i)];
end
dataM = array2table(A, ‘VariableNames’, col_names);
t1 = dataM;
[nr, nc] = size(t1);
t1_bottoms = t1;
for jj = 3:nc
for ii = 1:nr-1
if t1_bottoms{ii, jj} ~= t1_bottoms{ii+1, jj}
t1_bottoms{ii, jj} = NaN;
end
end
end
%==============================================================%
% Identifying NaN rows
%==============================================================%
nans = isnan(t1_bottoms{:, 3:end});
nan_rows = find(any(nans, 2));
xc = t1_bottoms{nan_rows, 1}; % Corrected x-coordinates
yc = zeros(numel(nan_rows), 1); % y-coordinates for NaN rows
for ii = 1:numel(nan_rows)
idx = find(~nans(nan_rows(ii), :), 1, ‘last’);
if isempty(idx)
yc(ii) = 0;
else
yc(ii) = t1_bottoms{nan_rows(ii), idx+2};
end
end
%==============================================================%
% Plot f Interpretation
%==============================================================%
figure(1)
plot(A(:, 1), A(:, 3:end))
hold on
grid on
set(gca, ‘YDir’, ‘reverse’)
xlabel(‘Distance (km)’);
ylabel(‘D (km)’);
title(‘Interpretation of profile data model’)
%==============================================================%
figure(2)
hold on
plot(t1_bottoms{:, 1}, t1_bottoms{:, 3:end}, ‘LineWidth’, 1)
set(gca, ‘YDir’, ‘reverse’)
box on
grid on
xlabel(‘Distance (km)’);
ylabel(‘Ds (km)’);
title(‘New interpretation of profile data’)
%==============================================================%
% Plot the interpreted d profiles
figure(3)
hold on
% Plot the interpreted d profiles
plot(t1_bottoms{:, 1}, t1_bottoms{:, 3:end}, ‘LineWidth’, 1)
yl = get(gca, ‘YLim’); % Get Y-axis limits
% Define f locations and corresponding dip
f_locations = [7.00, 14.00, 23.00];
d_angles = [58.47, 90.00, -69.79];
for ii = 1:numel(f_locations)
% Find the nearest x index for each fault location
[~, idx] = min(abs(t1_bottoms{:, 1} – f_locations(ii)));
% Get the starting x and y coordinates (fault starts at the surface)
x_f = t1_bottoms{idx, 1};
y_f = yl(1); % Start at the surface (0 km depth)
% Check if the dip angle is 90° (vf)
if d_angles(ii) == 90
x_r = [x_f x_f]; % Vertical line
y_r = [yl(1), yl(2)]; % From surface to de limit
else
% Convert dip to slope (m = tan(angle))
m = tand(d_angles(ii));
% Define the x range for fault line
% x_r = linspace(x_f – 5, x_f + 5, 100); % Extend 5 km on each side
x_r = x;
y_r = y_f – m * (x_r – x_f); % Line equation (+ m)
% Clip y_range within the plot limits
y_r(y_r > yl(2)) = yl(2);
y_r(y_r < yl(1)) = yl(1);
end
% Plot the fault lines in black (matching the image)
plot(x_r, y_r, ‘k’, ‘LineWidth’, 3)
% Display dip angles as text near the faults
% text(x_f, y_f + 1, sprintf(‘\theta = %.2f°’, d_angles(ii)), …
% ‘Color’, ‘k’, ‘FontSize’, 10, ‘FontWeight’, ‘bold’, ‘HorizontalAlignment’, ‘right’)
end
set(gca, ‘YDir’, ‘reverse’) % Reverse Y-axis for depth representation
box on
grid on
xlabel(‘Distance (km)’);
ylabel(‘D (km)’);
title(‘New Interpretation of Profile Data’)
%==============================================================%
% Plotting results
figure;
subplot(3,1,1);
plot(x, Field, ‘r’, ‘LineWidth’, 2);
title(‘Field Profile’);
xlabel(‘Distance (km)’);
ylabel(‘Field (unit)’);
grid on;
subplot(3,1,2);
plot(x(1:end-1), VG, ‘b’, ‘LineWidth’, 1.5);
xlabel(‘Distance (km)’);
ylabel(‘VG (munit/km)’);
title(‘VG Gradient’);
grid on;
subplot(3,1,3);
hold on;
for i = 1:num_blocks
plot(x, z_inv(:, i), ‘LineWidth’, 2);
end
title(‘Inverted D Profile for Each Block’);
xlabel(‘Distance (km)’);
ylabel(‘D (km)’);
set(gca, ‘YDir’, ‘reverse’);
grid on;
%==============================================================%
% Display results
fprintf(‘F Analysis Results:n’);
fprintf(‘Location (km) | Dip (degrees) | D_faults (km)n’);
for i = 1:length(f_locations)
fprintf(‘%10.2f | %17.2f | %10.2fn’, f_locations(i), f_dip_angles(i), D_faults(i));
end
% Ensure both variables are column vectors of the same size
f_locations = f_locations(:); % Convert to column vector
f_dip_angles = f_dip_angles(:); % Convert to column vector
% Concatenate and write to Excel
xlswrite(‘f_analysis_results.xlsx’, [f_locations, f_dip_angles]);
% Save results to Excel (only if fs are detected)
if ~isempty(f_locations)
xlswrite(‘f_analysis_results.xlsx’, [f_locations, f_dip_angles]);
else
warning(‘No significant fs detected.’);
end I need to correct figure (3) in the following script to be similar in the attached file (with automatic way)
clc; close all; clear;
% Load the Excel data
filename = 'ThreeFaultModel_Modified.xlsx';
data = xlsread(filename, 'Sheet1');
% Extract relevant columns
x = data(:, 1); % Distance (x in km)
Field = data(:, 2); % Earth field (Field in unit)
%==============================================================%
% Input number of layers and densities
num_blocks = input('Enter the number of blocks: ');
block_densities = zeros(1, num_blocks);
for i = 1:num_blocks
block_densities(i) = input(['Enter the density of block ', num2str(i), ' (kg/m^3): ']);
end
%==============================================================%
% Constants
G = 0.00676;
Lower_density = 2.67; % in kg/m^3
%==============================================================%
% Calculate inverted depth profile for each layer
z_inv = zeros(length(x), num_blocks);
for i = 1:num_blocks
density_contrast = block_densities(i) - Lower_density;
if density_contrast ~= 0
z_inv(:, i) = Field ./ (2 * pi * G * density_contrast);
else
z_inv(:, i) = NaN; % Avoid division by zero
end
end
%==============================================================%
% Compute vertical gradient (VG) of inverted depth (clean)
VG = diff(z_inv(:, 1)) ./ diff(x);
%==============================================================%
% Set fault threshold and find fault indices based on depth changes
f_threshold = 0.5; % Threshold for identifying significant depth changes
f_indices = find(abs(diff(z_inv(:, 1))) > f_threshold);
%==============================================================%
% Initialize fault locations and dip arrays
%==============================================================%
f_locations = x(f_indices); % Automatically determined fault locations
f_dip_angles = nan(size(f_indices)); % Placeholder for calculated dip
% Calculate dip for each identified fault
for i = 1:length(f_indices)
idx = f_indices(i);
if idx < length(x)
f_dip_angles(i) = atand(abs(z_inv(idx + 1, 1) - z_inv(idx, 1)) / (x(idx + 1) - x(idx)));
else
f_dip_angles(i) = atand(abs(z_inv(idx, 1) - z_inv(idx - 1, 1)) / (x(idx) - x(idx - 1)));
end
end
%==============================================================%
% Displacement of faults
%==============================================================%
D_faults = zeros(size(f_dip_angles));
for i = 1:length(f_indices)
idx = f_indices(i);
dip_angle_rad = deg2rad(f_dip_angles(i)); % Convert dip to radians
D_faults(i) = abs(z_inv(idx + 1, 1) - z_inv(idx, 1)) / sin(dip_angle_rad);
end
% Assign displacement values
D1 = D_faults(1); % NF displacement
D2 = D_faults(2); % VF displacement
D3 = D_faults(3); % RF displacement
%==============================================================%
% Processing Data for Interpretation
%==============================================================%
A = [x Field z_inv]; % New Data Obtained
col_names = {'x', 'Field'};
for i = 1:num_blocks
col_names{end+1} = ['z', num2str(i)];
end
dataM = array2table(A, 'VariableNames', col_names);
t1 = dataM;
[nr, nc] = size(t1);
t1_bottoms = t1;
for jj = 3:nc
for ii = 1:nr-1
if t1_bottoms{ii, jj} ~= t1_bottoms{ii+1, jj}
t1_bottoms{ii, jj} = NaN;
end
end
end
%==============================================================%
% Identifying NaN rows
%==============================================================%
nans = isnan(t1_bottoms{:, 3:end});
nan_rows = find(any(nans, 2));
xc = t1_bottoms{nan_rows, 1}; % Corrected x-coordinates
yc = zeros(numel(nan_rows), 1); % y-coordinates for NaN rows
for ii = 1:numel(nan_rows)
idx = find(~nans(nan_rows(ii), :), 1, 'last');
if isempty(idx)
yc(ii) = 0;
else
yc(ii) = t1_bottoms{nan_rows(ii), idx+2};
end
end
%==============================================================%
% Plot f Interpretation
%==============================================================%
figure(1)
plot(A(:, 1), A(:, 3:end))
hold on
grid on
set(gca, 'YDir', 'reverse')
xlabel('Distance (km)');
ylabel('D (km)');
title('Interpretation of profile data model')
%==============================================================%
figure(2)
hold on
plot(t1_bottoms{:, 1}, t1_bottoms{:, 3:end}, 'LineWidth', 1)
set(gca, 'YDir', 'reverse')
box on
grid on
xlabel('Distance (km)');
ylabel('Ds (km)');
title('New interpretation of profile data')
%==============================================================%
% Plot the interpreted d profiles
figure(3)
hold on
% Plot the interpreted d profiles
plot(t1_bottoms{:, 1}, t1_bottoms{:, 3:end}, 'LineWidth', 1)
yl = get(gca, 'YLim'); % Get Y-axis limits
% Define fault locations and corresponding dip angles
f_locations = [7.00, 14.00, 23.00];
d_angles = [58.47, 90.00, -69.79];
for ii = 1:numel(f_locations)
% Find the nearest x index for each fault location
[~, idx] = min(abs(t1_bottoms{:, 1} - f_locations(ii)));
% Get the starting x and y coordinates (fault starts at the surface)
x_f = t1_bottoms{idx, 1};
y_f = yl(1); % Start at the surface (0 km depth)
% Check if the dip angle is 90 degrees (vertical fault)
if d_angles(ii) == 90
x_r = [x_f x_f]; % Vertical line
y_r = [yl(1), yl(2)]; % From surface to depth limit
else
% Convert dip to slope (m = tan(angle))
m = tand(d_angles(ii));
% Define the x range for the fault line
% x_r = linspace(x_f - 5, x_f + 5, 100); % Extend 5 km on each side
x_r = x;
y_r = y_f - m * (x_r - x_f); % Line equation (+ m)
% Clip y_r within the plot limits
y_r(y_r > yl(2)) = yl(2);
y_r(y_r < yl(1)) = yl(1);
end
% Plot the fault lines in black (matching the image)
plot(x_r, y_r, 'k', 'LineWidth', 3)
% Display dip angles as text near the faults
% text(x_f, y_f + 1, sprintf('\theta = %.2f°', d_angles(ii)), ...
%     'Color', 'k', 'FontSize', 10, 'FontWeight', 'bold', 'HorizontalAlignment', 'right')
end
set(gca, 'YDir', 'reverse') % Reverse Y-axis for depth representation
box on
grid on
xlabel('Distance (km)');
ylabel('D (km)');
title('New Interpretation of Profile Data')
%==============================================================%
% Plotting results
figure;
subplot(3,1,1);
plot(x, Field, 'r', 'LineWidth', 2);
title('Field Profile');
xlabel('Distance (km)');
ylabel('Field (unit)');
grid on;
subplot(3,1,2);
plot(x(1:end-1), VG, 'b', 'LineWidth', 1.5);
xlabel('Distance (km)');
ylabel('VG (munit/km)');
title('VG Gradient');
grid on;
subplot(3,1,3);
hold on;
for i = 1:num_blocks
plot(x, z_inv(:, i), 'LineWidth', 2);
end
title('Inverted D Profile for Each Block');
xlabel('Distance (km)');
ylabel('D (km)');
set(gca, 'YDir', 'reverse');
grid on;
%==============================================================%
% Display results
fprintf('Fault Analysis Results:\n');
fprintf('Location (km) | Dip (degrees) | D_faults (km)\n');
for i = 1:length(f_locations)
fprintf('%10.2f | %17.2f | %10.2f\n', f_locations(i), f_dip_angles(i), D_faults(i));
end
% Ensure both variables are column vectors of the same size
f_locations = f_locations(:); % Convert to column vector
f_dip_angles = f_dip_angles(:); % Convert to column vector
% Save results to Excel (only if faults are detected)
if ~isempty(f_locations)
xlswrite('f_analysis_results.xlsx', [f_locations, f_dip_angles]);
else
warning('No significant faults detected.');
end
horizontal shift, vertical shift, diagonal shift MATLAB Answers — New Questions
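The detection logic in the script above is simple to sanity-check outside MATLAB: flag a fault wherever the jump in inverted depth between neighbouring stations exceeds a threshold, take the dip from atand(rise/run), and get the fault-plane displacement as throw/sin(dip). Below is a NumPy sketch of those three steps. The function name `detect_faults` and the synthetic step profile are my own illustration, not part of the original script.

```python
import numpy as np

def detect_faults(x, z, threshold=0.5):
    """Port of the MATLAB fault-detection step:
    find(abs(diff(z)) > threshold), then dip and displacement."""
    jumps = np.abs(np.diff(z))           # depth change between stations
    idx = np.where(jumps > threshold)[0]
    locations = x[idx]                   # x at the left side of each jump
    # dip angle in degrees from the local rise/run
    dips = np.degrees(np.arctan2(jumps[idx], x[idx + 1] - x[idx]))
    # fault-plane displacement: vertical throw divided by sin(dip)
    displacement = jumps[idx] / np.sin(np.radians(dips))
    return locations, dips, displacement

# A 2 km step in depth at x = 5 km (1 km station spacing) should yield
# one fault with dip = atan(2/1) and displacement = 2/sin(dip).
x = np.arange(0.0, 10.0)
z = np.where(x < 5, 1.0, 3.0)
locs, dips, disp = detect_faults(x, z)
```

Running the same three lines of arithmetic by hand against the MATLAB output is a quick way to confirm the Excel export contains what the loop computed.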
Regarding grid generation using finite difference method (*code fixed)
Hi everyone,
I’m trying to generate a uniform grid where the red lines intersect at a 45-degree angle to the horizontal lines in blue using the finite difference method in MATLAB.
However, the resulting grid is slightly off, as shown in the attached image. The intersection angle is uniform but larger than 45 degrees.
I set computational domain as and a grid is generated on physical domain with coordinates .
As I want to generate a mesh described above, I aimed to solve the following PDE :
where we have (horizontal lines). The boundary conditions are .
The above equation comes down to the following:
Based on the above equation, I discretized it into finite difference form (central difference for and forward difference for ).
Also, I found that the grid collapses when I set different angles (for example, 90 degrees (replace )).
Am I missing something in implementation?
I need your help.
Here is my code.
% Parameter settings
eta_max = 10;
xi_max = 10;
Delta_eta = 0.1;
Delta_xi = 0.1;
tol = 1e-6;
max_iterations = 100000;
% Grid size
eta_steps = floor(eta_max / Delta_eta) + 1;
xi_steps = floor(xi_max / Delta_xi) + 1;
% Initial conditions
x = zeros(eta_steps, xi_steps);
y = repmat((0:eta_steps-1)' * Delta_eta, 1, xi_steps);
% Boundary conditions
x(1, :) = linspace(0, xi_max, xi_steps);
% Finite difference method
converged = false;
iteration = 0;
while ~converged && iteration < max_iterations
iteration = iteration + 1;
max_error = 0;
for n = 1:eta_steps - 1
for m = 2:xi_steps - 1
x_xi = (x(n, m+1) - x(n, m-1)) / (2 * Delta_xi);
y_xi = (y(n, m+1) - y(n, m-1)) / (2 * Delta_xi);
new_x = x(n, m) + Delta_eta * ((pi/4) - y_xi)/(x_xi);
error_x = abs(new_x - x(n, m));
max_error = max([max_error, error_x]);
x(n + 1, m) = new_x;
end
x(n+1, xi_steps) = x(n+1, xi_steps-1) + Delta_xi;
x(n+1, 1) = x(n+1, 2) - Delta_xi;
end
if max_error < tol
converged = true;
end
end
% Plot the results
figure; hold on; axis equal; grid on;
for m = 1:xi_steps
plot(x(:, m), y(:, m), 'r');
end
for n = 1:eta_steps
plot(x(n, :), y(n, :), 'b');
end
title(['Iterations until convergence: ', num2str(iteration)]);
xlabel('x'); ylabel('y');
finite difference method, grid generation, mesh generation, numerical analysis, differential equations
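A note on the 45-degree condition, offered as a guess since the equation images did not come through: the update `new_x = x(n,m) + Delta_eta * ((pi/4) - y_xi)/x_xi` uses the angle pi/4 itself where the slope condition appears to need tan(pi/4) = 1. On the first row, y_xi = 0 and x_xi = 1, so the step gives x_eta = pi/4 ≈ 0.785, meaning the eta-lines have slope 1/0.785 ≈ 1.27, an angle of about 51.9 degrees: uniform but higher than 45, matching the symptom. It would also explain the collapse at 90 degrees, where the tangent is unbounded. Either way, it helps to measure the intersection angles of the generated grid directly. Here is a NumPy sketch; the helper `intersection_angles` is my own, not from the original code.

```python
import numpy as np

def intersection_angles(x, y):
    """Angle in degrees between the two families of grid lines of a
    structured grid x[n, m], y[n, m], evaluated at interior nodes
    with central differences."""
    # tangent vectors along xi (vary m) and along eta (vary n)
    dx_xi  = (x[:, 2:] - x[:, :-2]) / 2.0
    dy_xi  = (y[:, 2:] - y[:, :-2]) / 2.0
    dx_eta = (x[2:, :] - x[:-2, :]) / 2.0
    dy_eta = (y[2:, :] - y[:-2, :]) / 2.0
    # restrict both families to the common interior nodes
    t_xi  = np.stack([dx_xi[1:-1, :],  dy_xi[1:-1, :]])
    t_eta = np.stack([dx_eta[:, 1:-1], dy_eta[:, 1:-1]])
    dot  = (t_xi * t_eta).sum(axis=0)
    norm = np.linalg.norm(t_xi, axis=0) * np.linalg.norm(t_eta, axis=0)
    return np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))

# A uniformly sheared grid x = xi + eta, y = eta crosses its
# horizontal lines at exactly 45 degrees everywhere.
eta, xi = np.meshgrid(np.arange(6.0), np.arange(6.0), indexing='ij')
angles = intersection_angles(xi + eta, eta)
```

Running the same check on the `x`, `y` arrays produced by the marching loop would show whether the angle error comes from the scheme itself or from the boundary treatment.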
Microsoft Defender for Office 365 Exposes Bad Links in Email Preview
Recent Change Opens Door to Malicious Links Viewed in Email Preview
I receive many messages from readers about different aspects of Microsoft 365. To be honest, I usually don’t have much time to devote to these queries unless it’s an interesting topic. Hearing about a Microsoft 365 component that allows administrators to click links that are known to lead to bad destinations certainly fell into that category, especially when the communication comes from an experienced Security Operations (SecOps) practitioner.
Threat Explorer and Message Views
The Threat Explorer is part of Microsoft Defender for Office 365. It’s a tool to help the SecOps team understand the level of threat flowing into a tenant through email. The Explorer has multiple views that allow administrators to select different sets of messages, such as malicious messages blocked for different reasons. An All Email view is also available to show both bad and good messages delivered to a tenant. Even though it shows “all email,” this view could do with some filtering because it includes messages like public folder hierarchy synchronization traffic.
Figure 1 shows the Threat Explorer listing messages blocked for phishing. The details of the selected message are shown in the right-hand panel. The message purports to come from Charles Schwab. Two of the URLs in the message are for the real Charles Schwab site. The other is planted to bring unsuspecting users to the attacker’s site.
Figure 1: Threat Explorer listing messages blocked for phishing
Using Email Entity and Email Preview for Investigations
The Threat Explorer also includes several tools to help SecOps investigate threats. To see more detail about the bad message, an investigator can open the email entity to view more details about the message and any attachments. One of the options that then becomes available in the Take Action menu is to view an email preview. Seeing how a malicious message presents itself to a recipient is invaluable because it reveals how the attacker sets their trap for the unwary.
In this instance, the malicious message looks as if it could have come from the purported sender (Figure 2). The real links to pages on the Charles Schwab site are mixed in with the links to the attacker’s site (accessed from the Review Now button and Log In link).
Figure 2: Email preview of a malicious message
Here’s where the strange aspect arises. The links to the attacker’s site are live and can be clicked to bring the investigator to that site. On the one hand, this seems reasonable because an investigator is doing their job to follow the trail as far as possible. Skilled investigators will protect their workstations against malicious attack and take great care when accessing bad links.
The problem is not with security investigators. It arises when people who are less skilled with security tools and forensics, or less aware of how malware can infect a workstation, click a live and potentially dangerous link. Clicking a link opens a connection between the workstation and the target site. Because the email preview page uses a https://security.microsoft.com/emailpreview URL, VPN backhauling is often ignored, and the traffic goes directly to the attacker’s site.
Recent Change Enabled Bad Links in Email Preview
The odd thing is that Microsoft appears to have enabled the ability to use these links only recently. In the past, Defender used two versions of the email preview page: one was static without links; the other showed link details if you hovered over a link but the link was not clickable. Microsoft’s documentation makes no mention of the danger of clicking active links to attacker sites and there’s no trace that I can find of an announcement explaining why Defender now enables malicious links. Given Microsoft’s current focus on tightening security in every product, it just doesn’t make sense to make it easier for people to connect to sites that Defender has (usually correctly) identified as problematic and a potential source of infection.
My correspondent told me that he reported the issue to Microsoft. The support response was that the links are protected by the Safe Links feature and that no problems arise if you use a private browsing session or replace Edge with Firefox. It’s a curiously passive position that basically says it’s OK to keep dangerous stuff around if you take steps to protect yourself. Safe Links allowed me to click the bad link today. Enough said.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive monthly insights into what happens, why it happens, and what new features and capabilities mean for your tenant.
Matlab throwing an error in the sign in window
I installed MATLAB R2024b and on opening it, the sign-in window is showing the following. I have R2023a installed and it is working fine, but I am not able to open R2024b due to this. Please help!
sign-in