Category: News
Prevent changing values in GUI
Dear friends,
I have a query regarding building a GUI in MATLAB. I have made a GUI and everything works, except that the property watched by the listener, called 'angle', can be changed with an edit field and a slider, which have the limit obj.angle = [-10,10].
In the static code given below I have also added the condition 'if (obj.angle ~= obj.angle) & obj.angle<10 & obj.angle>-10', so the static code is not executed if the slider or edit field value is outside the given limits.
The problem is that if I go into angle.Value directly and write a number outside the limits of angle, then even though the static code is not executed, the value inside angle.Value is still updated and stored. I don't want this.
Could you please suggest a way to prevent the value from being updated manually inside angle.Value, so that angle.Value has the same limits as when the input comes through the slider or edit field?
Thank you in advance for your help.
Kind regards
classdef classErowin < handle
    properties
        fig
        serial_port
        UI
    end
    properties (SetObservable, AbortSet) % observable property for listeners
        angle
    end
    methods
        function obj = classErowin() % constructor
            obj.angle = [-10,10];
            % ... edit field and slider are created here (UI code elided)
        end
    end
    methods (Static)
        function handlePropEvents(src,evnt)
            obj = evnt.AffectedObject;
            switch src.Name
                case 'angle'
                    if (obj.angle ~= obj.angle) & obj.angle<10 & obj.angle>-10
                        % ... reaction to the change (body elided)
                    end
            end
        end
    end
end
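A minimal sketch of one way to get the behavior asked for above, assuming a scalar angle property (the class name, limits, and error identifier are illustrative, not the poster's code): give the property a set method, so that every assignment, including a direct write such as obj.angle = 25 at the command line, is range-checked before the value is stored.
classdef AngleHolder < handle % hypothetical class, not the classErowin above
    properties (SetObservable, AbortSet)
        angle = 0
    end
    methods
        function set.angle(obj, val)
            % Runs on every assignment, whether it comes from the slider,
            % the edit field, or a direct write to the property.
            if val < -10 || val > 10
                error('AngleHolder:outOfRange', 'angle must be in [-10,10].');
            end
            obj.angle = val; % in-range values are stored and still fire PostSet
        end
    end
end
With this in place, h = AngleHolder; h.angle = 25 throws an error instead of storing the value, while h.angle = 5 succeeds and still triggers any PostSet listener.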
gui, matlab gui MATLAB Answers — New Questions
Call a Python function inside a MATLAB loop
In a MATLAB script, is there a way to call a Python function in a for loop, in such a way that at every iteration the inputs of the Python function are different?
This is my case, where the arrays "a" and "b" are always different and, obviously, return different outputs:
% My Python function
import numpy as np
from scipy import stats
a = [7, 42, 61, 81, 115, 137, 80, 100, 121, 140, 127, 110, 81, 39, 59, 45, 38, 32, 29, 27, 35, 25, 22, 20, 19, 14, 12, 9, 8, 6, 3, 2, 2, 0, 0, 1, 0, 1, 0, 0];
b = a;
rng = np.random.default_rng()
method = stats.PermutationMethod(n_resamples=9999, random_state=rng)
res = stats.anderson_ksamp([a,b], method=method)
print(res.statistic)
print(res.critical_values)
print(res.pvalue)
To add more detail, I would like to have something like this in MATLAB:
% Call a Python function inside a MATLAB loop
for i = 1 : 10
a = randi([1 100],1,50);
b = randi([1 100],1,50);
out = call_python_function_here;
end
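A minimal sketch of how this could look (assuming a Python installation with NumPy and SciPy has already been configured for MATLAB, e.g. via pyenv; the conversion calls are illustrative):
% Sketch: call scipy.stats.anderson_ksamp from a MATLAB loop with new
% inputs on every iteration.
for i = 1:10
    a = randi([1 100],1,50);
    b = randi([1 100],1,50);
    % Cell arrays convert to Python lists; numeric vectors to NumPy arrays.
    res = py.scipy.stats.anderson_ksamp({py.numpy.array(a), py.numpy.array(b)});
    stat = double(res.statistic);   % double() conversion assumes a recent MATLAB release
    pval = double(res.pvalue);
    fprintf('iter %d: statistic = %.4f, p-value = %.4f\n', i, stat, pval);
end
The PermutationMethod keyword argument from the original snippet could be passed with pyargs, or the whole computation could be moved into a small .py file that is imported once with py.importlib.import_module and then called inside the loop.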
end call python MATLAB Answers — New Questions
Display row of unique values based on data validation list?
I have 2 columns (picture provided): Column A is the starting Week Of date (the date of every Monday), and Column B holds the dates of the 7 days that make up that Week Of. So column A has 7 duplicated date values per week, and column B has 7 unique date values.
Within that I have a data-validation reference list of all 'Week Of' dates. The goal is that whenever I select a new Week Of date from my list, the 7 days of that week are displayed transposed (column by column), say from Column F to Column L.
So far I've used the TRANSPOSE and UNIQUE functions to get a foundation of what I want to create, but I am not sure how to use a VLOOKUP or IF statement to bring this formula together.
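One possible approach (a sketch only; the cell references are assumptions, not from the post): if the selected Week Of date lands in E1 and the data sits in A2:B400, the Excel 365 dynamic-array formula =TRANSPOSE(FILTER($B$2:$B$400,$A$2:$A$400=E1)) entered in F1 would spill the 7 matching day dates across F1:L1, with no VLOOKUP or IF needed.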
Read More
Limitation 6000 Error – Help!
Operation on target Get_Failed_Tables failed: There are substantial concurrent copy activity executions which is causing failures due to throttling under subscription XXXX, region eu and limitation 6000. Please reduce the concurrent executions. For limits, refer https://aka.ms/adflimits. ErrorCode: UserErrorWithLargeConcurrentRuns.
Our production data loads have been down for 6+ days. We have been unable to get around this error.
Azure Integration Runtimes have been working for a couple years now. This abruptly stopped working.
Support has not offered the root cause.
ADF support recommends using Self-hosted Integration Runtimes.
Support just reads from the canned solution from the internet and has been unresponsive today even when asking to escalate to Microsoft Global Critical Situation Management.
Changing from Azure to Self-hosted Integration runtimes does not work since Data Flows are not compatible.
Thanks in advance for any insight you can provide!
Read More
Conditional formatting 2 sum cells
Hello,
I have a spreadsheet that tracks clinical supervision hours. One column sums individual supervision (F55) and the other group supervision (G55). There is a limit with group supervision hours (100 hours), so I have conditional formatting turn red when the cell hits 100 hours and I use the MIN formula so it stops tracking. The total supervision requirement is 200 hours and conditional formatting for the individual supervision column (F55) is set to turn green when it hits 200 hours. I’d like the individual supervision column (F55) to turn green once the combination of the individual (F55) and group columns (G55) totals 200. Right now the individual column turns green only when it hits 200 hours, which doesn’t make any sense if you’re doing a combination of individual and group supervision.
Hope that was understood. Thanks for your assistance!
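One way to do this (a sketch; the cell addresses come from the post, but the rule formula is an assumption): change the green rule on F55 from a cell-value condition to a formula-based conditional-formatting rule such as =SUM($F$55,$G$55)>=200, so the cell turns green once the combined individual and group hours reach 200.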
Read More
Selecting the three dates after specific date
Hi
I want to select the first three dates after a specific date. Is there a formula to do that? See the attached file.
Thx.
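A possible formula (a sketch; the ranges are assumptions, since the file isn't shown here): with the dates in A2:A100 and the specific date in C1, the Excel 365 formula =TAKE(SORT(FILTER($A$2:$A$100,$A$2:$A$100>$C$1)),3) returns the first three dates after it; in older Excel, =SMALL(IF($A$2:$A$100>$C$1,$A$2:$A$100),ROW($1:$3)) entered as an array formula does the same.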
Read More
A Comprehensive Guide to the Landing Zone for Red Hat Enterprise Linux (RHEL) on Azure
The Essence of the Landing zone for RHEL on Azure: The landing zone for RHEL on Azure is a set of guidelines and a blueprint for success in the cloud. It encompasses a range of critical considerations, from identity and access management to network topology, security, and compliance. This document lays out a path for organizations to follow, ensuring that their RHEL systems are deployed with resiliency and aligned with enterprise-scale design principles.
Reference Architecture
The following diagram shows the Landing zone for RHEL on Azure architecture.
The design areas below provide design recommendations and considerations for the Landing zone for RHEL on Azure to accelerate your journey.
Management Group and Subscription Organization
Identity and access management
Network topology and connectivity
Business continuity and disaster recovery
Governance and compliance
Security
Management and monitoring
Platform automation & DevOps
Overview
It provides design recommendations and a reference architecture, allowing organizations to make critical design decisions quickly and at scale.
The document emphasizes the importance of a Standard Operating Environment (SOE) and the advantages of implementing the Red Hat Infrastructure Standard.
It delves into the intricacies of identity and access management, offering insights into the integration of Red Hat Enterprise Linux with Microsoft Active Directory and Microsoft Entra ID.
Identity and Access Management
Red Hat Identity Management (IdM) integrates with Microsoft Active Directory and Microsoft Entra ID, providing a centralized Linux identity authority that increases operational efficiency and access control visibility.
The document recommends automating the deployment, configuration, and day-2 operations of Red Hat Identity Management using the redhat.rhel_idm certified Ansible collection.
Network Topology and Connectivity
The Landing zone for RHEL on Azure emphasizes the importance of a well-designed network topology to support the deployment of RHEL systems in Azure, along with methods for a zero-trust network model and deeper micro-segmentation for enhanced security.
Deployment, Management, and Patching
Deployment of RHEL instances within Azure is performed using a system image prepared for Azure, with options available through the Azure Marketplace or Red Hat Cloud Access.
For infrastructure as code, utilize Azure Verified Modules, which enable and accelerate consistent solution development and delivery of cloud-native or migrated applications and their supporting infrastructure by codifying Microsoft guidance (the Well-Architected Framework) with best-practice configurations.
Red Hat Satellite and Red Hat Satellite Capsule are recommended for automating the software lifecycle and delivering software to systems wherever they are deployed.
Business Continuity & Disaster Recovery (BCDR):
The document outlines the use of Azure on-demand capacity reservation to ensure sufficient availability for RHEL deployments in Azure regions.
It discusses the importance of geographical deployment considerations for IdM infrastructure to reduce latencies and ensure no single point of failure in replication.
These examples demonstrate the comprehensive approach taken in the document to cover various critical design areas for deploying RHEL on Azure.
A scalable and repeatable approach
One of the standout features of the Landing zone for RHEL on Azure is that it is built on learnings and best practices, including reference architecture. Organizations can adapt the landing zone solution to fit their specific needs, putting them on a path to sustainable scalability and automation. The document provides guidelines for creating a landing zone solution that is both robust and flexible, capable of evolving alongside the organization's requirements.
Conclusion: The landing zone for RHEL on Azure documentation is a testament to the collaborative effort of industry leaders to provide a structured and secure approach to cloud deployment. It is a resource that empowers organizations to harness the full potential of RHEL on Azure, paving the way for a future where cloud infrastructure is synonymous with innovation and excellence. We encourage you to check out the published document and explore how it can benefit your organization today!
Microsoft Tech Community – Latest Blogs – Read More
Logic Apps Standard – Service Bus In-App connector improvements for Peek-lock operations
In collaboration with Divya Swarnkar and Aprana Seth.
The Service Bus In-App connector is bringing new triggers and actions for peek-lock operations. These changes allow peek-lock operations on messages in queues and topics that don't require sessions to be started and completed from any instance of the runtime available in the pool of resources, removing the previous requirements for VNET integration and a fixed number of role instances, which were needed because of the underlying client SDK used by the connector.
The new trigger and actions will be the default operations for peek-lock, but will not impact existing workflows. Read through the next sections to learn more about this update and its impact.
New triggers
Starting from bundle version 1.81.x, you will find new triggers for messages available in a queue or topic using the peek-lock method:
New Actions
Starting from bundle version 1.81.x, you will find new actions for managing messages in queues or topic subscriptions using the peek-lock method.
What is the difference between this version and the previous version of the connector?
The new connector actions require details of the repository holding the message (queue name / topic and subscription name) as well as the lock token, whereas the previous version required the message id.
This allows the connector to reuse or initialize a client on any instance of the runtime available in the pool of resources. With that, not only are the prerequisites of VNET integration and a fixed number of role instances removed, but also the requirement that the same Message Receiver that peeked the message be the workflow that executes all the actions. For more information about the previous connector requirements, check this Tech Community post.
What is the impact on existing workflows that used the previous version of the Service Bus actions?
The previous actions and triggers are now marked as internal. This is how Logic Apps indicates that the actions defined in existing workflows are still supported by the runtime, both at design time and during workflow execution, but should not be used for new workflows.
The impact for you as a developer is:
Workflows with the old version of the trigger and actions will show normally in the designer and be fully supported by the runtime. This means that if you have existing workflows, you will not need to change them.
The runtime does not support the new and old versions of the actions in the same workflow. You can have workflows that use each version independently, but you can't mix and match versions in the same workflow.
This means that if you need to add Service Bus actions to a workflow that already has actions from the previous version of the connector, all actions must be changed to the new version. Notice that all properties from the old version exist in the new one, so you can simply replace the individual actions, providing the required parameters.
What happens if my workflow requires session support?
If your workflow requires sessions, you will be using the existing triggers and actions that are specific to sessions. Those actions are the same as in the previous version, as the underlying SDK doesn't support executing actions against a message in a session-enabled repository from any client instance.
That means that the VNET integration requirement, which existed for sessions in the previous connector, still applies. The requirement for a fixed number of role instances was removed in a previous update, when the connector received concurrency support. You can read more about the Service Bus connector support for sessions here.
What happens if I am using the Export Tool to migrate my ISE Logic Apps?
As customers are still making their final efforts to migrate Logic Apps from ISE to Logic Apps Standard, with many migration processes underway, we decided to keep the previous version of the Service Bus connector as the migrated connector. The reason for that decision is that many customers are still actively migrating their ISE logic app fleet, with some workflows already migrated and others still in progress. Having two different connectors coming from the same export process would confuse customers and complicate their support during runtime.
After the ISE retirement is complete, we will update the export tool to support the latest version of the connector.
Microsoft Tech Community – Latest Blogs – Read More
Does MATLAB run on Windows XP?
Does MATLAB run on Windows XP? MATLAB Answers — New Questions
Jacobian calculation of symbolic variables which are functions of other variables
Hi everyone,
I am trying to find the Jacobian for a transformation matrix. I am using symbolic variables (T, l, m, n). Each of these variables is a function of 4 other variables (delta1, delta2, delta3, and delta4), as follows:
syms u v w p q r phi theta psi x y z;
syms delta1 delta2 delta3 delta4;
syms rho Kf Km K mass g J_yy real % constants assumed defined elsewhere; declared symbolically so the snippet runs
gam = sym('gam', [1 8]);           % gam(1)..gam(8) assumed defined elsewhere
% Aerodynamics
V = sqrt(u^2 + v^2 + w^2);
q_bar = (1/2) * rho * V^2;
m = (-1.35* Kf * delta1^2) + (1.35* Kf * delta2^2) + (1.35*K * Kf * delta3^2) + (-1.35* Kf * delta4^2);
l = (0.904* Kf *delta1^2) + (-0.904* Kf *delta2^2) + (0.904* Kf *delta3^2) + (-0.904* Kf *delta4^2);
n = (Km * delta1^2) + (Km * delta2^2) - (Km * delta3^2) - (Km * delta4^2);
T1= (Kf * delta1^2);
T2= (Kf * delta2^2);
T3= (Kf * delta3^2);
T4= (Kf * delta4^2);
T= T1 + T2 + T3 + T4;
phi_dot = p + tan(theta) * (q * sin(phi) + r * cos(phi));
theta_dot = q * cos(phi) - r * sin(phi);
psi_dot = (q * sin(phi) + r * cos(phi)) / cos(theta);
x_dot = cos(psi)*cos(theta)*u + (cos(psi)*sin(theta)*sin(phi) - sin(psi)*cos(phi))*v + (cos(psi)*sin(theta)*cos(phi) + sin(psi)*sin(phi))*w;
y_dot = (sin(psi)*cos(theta))*u + (sin(psi)*sin(theta)*sin(phi) + cos(psi)*cos(phi))*v + (sin(psi)*sin(theta)*cos(phi) - cos(psi)*sin(phi))*w;
z_dot = -sin(theta)*u + cos(theta)*sin(phi)*v + cos(theta)*cos(phi)*w;
f_x = -mass*g * sin(theta);
f_y = mass*g * sin(phi) * cos(theta);
f_z = mass*g * cos(phi) * cos(theta) - T;
u_dot = r*v - q*w + (1/mass) * (f_x);
v_dot = p*w - r*u + (1/mass) * (f_y);
w_dot = q*u - p*v + (1/mass) * (f_z);
p_dot = gam(1)*p*q - gam(2)*q*r + gam(3)*l + gam(4)*n;
q_dot = gam(5)*p*r - gam(6)*(p^2 - r^2) + (1/J_yy) * m;
r_dot = gam(7)*p*q - gam(1)*q*r + gam(4)*l + gam(8)*n;
% Collect dynamics
f = [ x_dot;
y_dot;
z_dot;
phi_dot;
theta_dot;
psi_dot;
u_dot;
v_dot;
w_dot;
p_dot;
q_dot;
r_dot];
jacobian(f,[T l m n]);
So when calculating jacobian(f,[T l m n]), I get the error:
"Invalid argument at position 2. Argument must be a variable, a symfun without a formula, or a symfun whose formula is a variable."
Can someone please give me a solution to this problem?
jacobian MATLAB Answers — New Questions
multiplying a function handle by a constant
For the code below I am trying to find the polynomial equation that represents the system. There are 4 second-order ODEs; I have turned them into 4 first-order ODEs, where the first-order differentials become "y1, y2, y3 and y4" and the second-order differentials become first order. I then put all 4 into a matlabFunction. How do I multiply the function handle by the constant "h" without getting the error "Operator '*' is not supported for operands of type 'function_handle'."?
The rest of the code works when working with a single differential equation fx(x,y,t) with only "x,y,t", but in my equations I have "Xf,Xr,Xb,theta" and "Vf,Vr,Vb,omega", which I have chosen to represent as "x1,x2,x3,x4" and "y1,y2,y3,y4" respectively. The next question is: will I run into problems here as well?
I am not that experienced with the matlabFunction command, so I don't know how to get this to work. The project requires me to use 2 different numerical methods to find the polynomial equation that best fits, so I cannot use "polyfit" to get the polynomial that best fits. Any suggestions that can help me get this to work would be appreciated.
clear,close,clc
%______________________________________________________________SOLUTION_2_Heun's_Method_(for second order differential equations)_&_Least-Square_Method____________________________________________________________%
%4 Equations representing the system working with
% MfXf"=Ksf([Xb-(L1*theta)]-Xf)+Bsf([Xb'-(L1*theta')-Xf')-(Kf*Xf);
% MrXr"=Ksr([Xb+(L2*theta)]-Xr)+Bsr([Xb'+(L2*theta')]-Xr')-(Kr*Xr) ;
% MbXb"=Ksf([Xb-(L1*theta)]-Xf)+Bsf([Xb'-(L1*theta')]-Xf')+Ksr([Xb+(L2*theta)]-Xr)+Bsr([Xb'+(L2*theta')]-Xr')+fa(t);
% Ic*theta"={-[Ksf(Xf-(L1*theta))*L1]-[Bsf(Xf'-(L1*theta'))*L1]+[Ksr(Xr+(L2*theta))*L2]+[Bsr(Xr'+(L2*theta'))*L2]+[fa(t)*L3]};
clc
clear
%-------------------------------------------------SYSTEM_PARAMETERS--------------------------------------------------------------------------------------------------------------------------------------%
Ic=1356; %kg-m^2
Mb=730; %kg
Mf=59; %kg
Mr=45; %kg
Kf=23000; %N/m
Ksf=18750; %N/m
Kr=16182; %N/m
Ksr=12574; %N/m
Bsf=100; %N*s/m
Bsr=100; %N*s/m
L1=1.45; %m
L2=1.39; %m
L3=0.67; %m
t=[0:20]; % time from 0 to 20 seconds
n=4; %order of the polynomial
%Initial Conditions-system at rest therefore x(0)=0 dXf/dt(0)=0 dXr/dt(0)=0 dXb/dt(0)=0 dtheta/dt(0)=0 ;
%time from 0 to 20 h=dx=5;
x0=0; %x at initial condition
y0=0; %y at initial condition
t0=0; %t at the start
dx=5; %delta(x) or h
h=dx;
tm=20; %what value of (x) you are ending at
Xf = sym('x1'); %x1=Xf
Xr = sym('x2'); %x2=Xr
Xb = sym('x3'); %x3=Xb
theta = sym('x4'); %x4=theta
Vf = sym('y1'); %y1=Xf' = Vf = dXf/dt = Xf_1
Vr = sym('y2'); %y2=Xr' = Vr = dXr/dt = Xr_1
Vb = sym('y3'); %y3=Xb' = Vb = dXb/dt = Xb_1
omega = sym('y4'); %y4=theta' = omega = dtheta/dt = theta_1
t = sym('t');
% MfXf"=Ksf([Xb-(L1*theta)]-Xf)+Bsf([Xb'-(L1*theta')-Xf')-(Kf*Xf);
% Vf'=(1/Mf)(Ksf([Xb-(L1*theta)]-Xf)+Bsf([Xb'-(L1*theta')-Xf')-(Kf*Xf));
% MrXr"=Ksr([Xb+(L2*theta)]-Xr)+Bsr([Xb'+(L2*theta')]-Xr')-(Kr*Xr) ;
% Vr'=(1/Mr)(Ksr([Xb+(L2*theta)]-Xr)+Bsr([Xb'+(L2*theta')]-Xr')-(Kr*Xr)) ;
% MbXb"=Ksf([Xb-(L1*theta)]-Xf)+Bsf([Xb'-(L1*theta')]-Xf')+Ksr([Xb+(L2*theta)]-Xr)+Bsr([Xb'+(L2*theta')]-Xr')+fa(t);
% Vb'=(1/Mb)(-Ksf([Xb-(L1*theta)]-Xf)-Bsf([Xb'-(L1*theta')]-Xf')-Ksr([Xb+(L2*theta)]-Xr)-Bsr([Xb'+(L2*theta')]-Xr')+fa(t));
% Ic*theta"={-[Ksf(Xf-(L1*theta))*L1]-[Bsf(Xf'-(L1*theta'))*L1]+[Ksr(Xr+(L2*theta))*L2]+[Bsr(Xr'+(L2*theta'))*L2]+[fa(t)*L3]};
% omega'=(1/Ic){[-Ksf*Xf + Ksf((L1)^2)*theta)] + [-Bsf*Vf*L1 + Bsf*Vf*L1 + Bsf((L1)^2)*omega)] + [Ksr*Xr*L2 + Ksr((L2)^2)*theta)] + [Bsr*Vr*L2 + Bsr((L2)^2)*omega)*L2] + [fa(t)*L3]};
Xf_2=(Ksf*((Xb-(L1*theta))-Xf)+Bsf*((Vb-(L1*omega)-Vf)-(Kf*Xf)))/Mf;
Xr_2=(Ksr*((Xb+(L2*theta))-Xr)+Bsr*((Vb+(L2*omega))-Vr)-(Kr*Xr))/Mr;
Xb_2=(Ksf*((Xb-(L1*theta))-Xf)+Bsf*((Vb-(L1*omega))-Vf)+Ksr*((Xb+(L2*theta))-Xr)+Bsr*((Vb+(L2*omega))-Vr)+(10*exp(-(5*t))))/Mb; % theta' replaced by omega, matching the state definitions above
theta_2=((-(Ksf*(Xf-(L1^2*theta))*L1)-(Bsf*(Vf-(L1*omega))*L1)+(Ksr*(Xr+(L2*theta))*L2)+(Bsr*(Vr+(L2*omega))*L2)+((10*exp(-(5*t)))*L3)))/Ic;
Eqns=[Xf_2; Xr_2; Xb_2; theta_2];
F1=matlabFunction(Eqns,'Vars',{'x1','x2','x3','x4','y1','y2','y3','y4','t'})
%==INPUT SECTION for Euler’s and Heun’s==%
fx=@(x,y,t)y;
fy=@(x,y,t)F1;
%==CALCULATIONS SECTION==%
tn=t0:h:tm;
xn(1) = x0;
yn(1) = y0;
for i=1:length(tn)
%==EULER’S METHOD
xn(i+1)=xn(i)+fx(xn(i),yn(i),tn(i))*h;
yn(i+1)=yn(i)+fy(xn(i),yn(i),tn(i))*h;
%==NEXT 3 LINES ARE FOR HEUN’S METHOD
tn(i+1)=tn(i)+h;
xn(i+1)=xn(i)+0.5*(fx(xn(i),yn(i),tn(i))+fx(xn(i+1),yn(i+1),tn(i+1)))*h;
yn(i+1)=yn(i)+0.5*(fy(xn(i),yn(i),tn(i))+fy(xn(i+1),yn(i+1),tn(i+1)))*h;
fprintf('t=%0.2f\t x=%0.3f\t y=%0.3f\n',tn(i),xn(i),yn(i))
end
%%%LEAST SQUARE METHOD-FINDS POLYNOMIAL FOR GIVEN DATA SET%%%%%
%INPUT SECTION for Least-Square
X=xn;
Y=yn;
%%__CALCULATIONS SECTION__%%
k=length(X); %NUMBER OF AVAILABLE DATA POINTS
m=n+1; %SIZE OF THE COEFFICENT MATRIX
A=zeros(m,m); %COEFFICENT MATRIX
for j=1:m
for i=1:m
A(j,i)=sum(X.^(i+j-2));
end
end
B=zeros(m,1); %FORCING FUNCTION VECTOR
for i=1:m;
B(i)=sum(Y.*X.^(i-1));
end
a1=A\B %COEFFICIENTS FOR THE POLYNOMIAL --> y=a0+a1*x+a2*x^2+...+an*x^n; CAN BE REPLACED BY GAUSSIAN ELIMINATION
%%%%%=========GAUSSIAN ELIMINATION TO FIND "a"========%%%%%%
%%%INPUT SECTION
%CALCULATION SECTION
AB=[A B]; %Augmented matrix
R=size(AB,1); %# OF ROWS IN AB
C=size(AB,2); %# OF COLUMNS IN AB
%%%%FORWARD ELIMINATION SECTION
for J=1:R-1
[M,I]=max(abs(AB(J:R,J))); %M=MAXIMUM VALUE, I=LOCATION OF THE MAXIMUM VALUE IN THE 1ST ROW
temp=AB(J,:);
AB(J,:)=AB(I+(J-1),:);
AB(I+(J-1),:)=temp;
for i=(J+1):R;
if AB(i,J)~=0;
AB(i,:)=AB(i,:)-(AB(i,J)/AB(J,J))*AB(J,:);
end
end
end
%%%%BACKWARDS SUBSTITUTION
a(R)=AB(R,C)/AB(R,R);
for i=R-1:-1:1
a(i)=(AB(i,C)-AB(i,i+1:R)*a(i+1:R)')/AB(i,i);
end
disp(a)
syms X
P=0;
for i=1:m;
TT=a(i)*X^(i-1); %TT=INDIVIDUAL POLYNOMIAL TERMS
P=P+TT;
end
display(P)
%========END OF GAUSSIAN ELIMINATION=======%%%%%%%%
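Regarding the '*' error asked about above: a function handle must be called before its result can be scaled; multiplying the handle itself is what triggers the message. A minimal sketch with a toy handle (not the F1 above):
f = @(t) t.^2;       % a function handle
h = 5;
% y = h * f;         % error: '*' is not supported for function_handle
y = h * f(3);        % call first, then multiply: y = 45
g = @(t) h * f(t);   % or wrap the scaling inside a new handle
In the posted code the same issue appears in fy=@(x,y,t)F1, which returns the handle F1 itself instead of calling it, so fy(...)*h multiplies a handle. Defining fy so that it actually calls F1 with its nine scalar arguments (for example fy=@(x,y,t)F1(x(1),x(2),x(3),x(4),y(1),y(2),y(3),y(4),t), with x and y kept as 4-element state vectors; this grouping is an assumption) would make fy return a numeric column that can be multiplied by h.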
multiplying function handle, polynomial that best fits, heun's method MATLAB Answers — New Questions
MDEClientAnalyzer not working on Suse 12
We are having issues with running MDEClientAnalyzer on SUSE 12. SUSE 12 is officially supported by MDE, so I assume MDEClientAnalyzer is as well. However, when we run it according to the MS instructions,
we receive the error "could not run command /bin/hostname exception: RAN: /bin/hostname -A".
It looks like the hostname command on SUSE Linux does not support the -A parameter. On RHEL it works perfectly and shows the FQDN when running this command. On SUSE it should be hostname -f, but MDEClientAnalyzer is not editable as it is a binary. Does anyone know how to fix this?
Read More
Show or hide the Discover feed in Microsoft Teams
Hi, Microsoft 365 Insiders,
We’re excited to introduce a new enhancement in Microsoft Teams: the ability to show or hide the Discover feed. This personalized, relevance-based feed helps you stay informed and engaged with important content while managing information overload.
Check out our latest blog: Show or hide the Discover feed in Microsoft Teams
Thanks!
Perry Sjogren
Microsoft 365 Insider Community Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android
Read More
Ihor Zahorodnii
DataOps for the modern data warehouse
This article describes how a fictional city planning office could use this solution. The solution provides an end-to-end data pipeline that follows the MDW architectural pattern, along with corresponding DevOps and DataOps processes, to assess parking use and make more informed business decisions.
Architecture
The following diagram shows the overall architecture of the solution.
Dataflow
Azure Data Factory (ADF) orchestrates and Azure Data Lake Storage (ADLS) Gen2 stores the data:
The Contoso city parking web service API is available to transfer data from the parking spots.
There’s an ADF copy job that transfers the data into the Landing schema.
Next, Azure Databricks cleanses and standardizes the data. It takes the raw data and conditions it so data scientists can use it.
If validation reveals any bad data, it gets dumped into the Malformed schema.
Important
People have asked why the data isn’t validated before it’s stored in ADLS. The reason is that the validation might introduce a bug that could corrupt the dataset. If you introduce a bug at this step, you can fix the bug and replay your pipeline. If you dumped the bad data before you added it to ADLS, then the corrupted data is useless because you can’t replay your pipeline.
There’s a second Azure Databricks transform step that converts the data into a format that you can store in the data warehouse.
Finally, the pipeline serves the data in two different ways:
Databricks makes the data available to the data scientist so they can train models.
PolyBase moves the data from the data lake to Azure Synapse Analytics, and Power BI accesses the data and presents it to the business user.
Components
The solution uses these components:
Azure Data Factory (ADF)
Azure Databricks
Azure Data Lake Storage (ADLS) Gen2
Azure Synapse Analytics
Azure Key Vault
Azure DevOps
Power BI
Scenario details
A modern data warehouse (MDW) lets you easily bring all of your data together at any scale. It doesn't matter if it's structured, unstructured, or semi-structured data. You can gain insights from an MDW through analytical dashboards, operational reports, or advanced analytics for all your users.
Setting up an MDW environment for both development (dev) and production (prod) environments is complex. Automating the process is key. It helps increase productivity while minimizing the risk of errors.
This article describes how a fictional city planning office could use this solution. The solution provides an end-to-end data pipeline that follows the MDW architectural pattern, along with corresponding DevOps and DataOps processes, to assess parking use and make more informed business decisions.
Solution requirements
Ability to collect data from different sources or systems.
Infrastructure as code: deploy new dev and staging (stg) environments in an automated manner.
Deploy application changes across different environments in an automated manner:
Implement continuous integration and continuous delivery (CI/CD) pipelines.
Use deployment gates for manual approvals.
Pipeline as Code: ensure the CI/CD pipeline definitions are in source control.
Carry out integration tests on changes using a sample data set.
Run pipelines on a scheduled basis.
Support future agile development, including the addition of data science workloads.
Support for both row-level and object-level security:
The security feature is available in SQL Database.
You can also find it in Azure Synapse Analytics, Azure Analysis Services (AAS) and Power BI.
Support for 10 concurrent dashboard users and 20 concurrent power users.
The data pipeline should carry out data validation and filter out malformed records to a specified store.
Support monitoring.
Centralized configuration in a secure storage like Azure Key Vault.
More details here: https://learn.microsoft.com/en-us/azure/architecture/databases/architecture/dataops-mdw
Read More
New Blog | Leveraging Azure DDoS protection with WAF rate limiting
By Saleem Bseeu
Introduction
In an increasingly interconnected world, the need for robust cybersecurity measures has never been more critical. As businesses and organizations migrate to the cloud, they must address not only the conventional threats but also more sophisticated ones like Distributed Denial of Service (DDoS) attacks. Azure, Microsoft’s cloud computing platform, offers powerful tools to protect your applications and data. In this blog post, we will explore how to leverage Azure DDoS Protection in combination with Azure Web Application Firewall (WAF) rate limiting to enhance your security posture.
Understanding DDoS Attacks
Distributed Denial of Service attacks are a malicious attempt to disrupt the normal functioning of a network, service, or website by overwhelming it with a flood of internet traffic. These attacks can paralyze online services, causing severe downtime and financial losses. Azure DDoS Protection is a service designed to mitigate such attacks and ensure the availability of your applications hosted on Azure.
Combining Azure DDoS Protection with WAF Rate Limiting
While Azure DDoS Protection can mitigate many types of attacks, it's often beneficial to combine it with a Web Application Firewall for comprehensive security. Azure WAF provides protection at the application layer, inspecting HTTP/HTTPS traffic and identifying and blocking malicious requests. One of the key features of Azure WAF is rate limiting, which lets you control the number of incoming requests from a single IP address or geographic location. By setting appropriate rate-limiting rules, you can mitigate application-layer DDoS attacks.
In this article, we will delve into DDoS protection logs, exploring how to harness this valuable data to configure rate limiting on the Application Gateway WAF. By doing so, we fortify our defenses at various layers, ensuring a holistic approach to DDoS protection.
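To illustrate the mechanics behind such a rule, here is a conceptual Python sketch of a fixed-window limiter keyed by client IP. This is not Azure WAF code, and the window and threshold values are arbitrary assumptions:

# Conceptual sketch only: a fixed-window, per-IP rate limiter illustrating
# what a WAF rate-limiting rule does (this is NOT Azure WAF's implementation).
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # assumption: 1-minute window
MAX_REQUESTS = 100    # assumption: per-IP threshold

_counters = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, count]

def allow_request(client_ip: str) -> bool:
    now = time.time()
    window_start, count = _counters[client_ip]
    if now - window_start >= WINDOW_SECONDS:
        _counters[client_ip] = [now, 1]    # start a fresh window
        return True
    if count < MAX_REQUESTS:
        _counters[client_ip][1] = count + 1
        return True
    return False                           # over the limit: reject (e.g. HTTP 429)

Conceptually, a WAF rate-limit rule applies this kind of bookkeeping at the gateway, so over-limit traffic never reaches the backend application.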
Read the full post here: Leveraging Azure DDoS protection with WAF rate limiting
New Blog | Microsoft Power BI and Defender for Cloud – Part 2: Overcoming ARG 1000-Record Limit
By Giulio Astori
In our previous blog, we explored how Power BI can complement Azure Workbook for consuming and visualizing data from Microsoft Defender for Cloud (MDC). In this second installment of our series, we dive into a common limitation faced when working with Azure Resource Graph (ARG) data – the 1000-record limit – and how Power BI can effectively address this constraint to enhance your data analysis and security insights.
The 1000-Record Limit: A Bottleneck in Data Analysis
When querying Azure Resource Graph (ARG) programmatically or using tools like Azure Workbook, users often face a limitation where the results are truncated to 1000 records. This limitation can be problematic for environments with extensive data, such as those with numerous subscriptions or complex resource configurations. Notably, this limit does not apply when accessing data through the Azure Portal’s built-in Azure Resource Graph Explorer, where users can query and view larger datasets without restriction. This difference can create a significant bottleneck for organizations relying on programmatic access to ARG data for comprehensive analysis.
Power BI and ARG Data Connector: Breaking Through the Limit
One of the key advantages of using Power BI’s ARG data connector is its ability to bypass the 1000-record limit imposed by Azure Workbook and other similar tools. By leveraging Power BI’s capabilities, users can access and visualize a comprehensive dataset without the constraints that typically come with ARG queries.
The Power BI ARG data connector provides a robust solution by enabling the extraction of larger datasets, which allows for more detailed and insightful analysis. This feature is particularly useful for organizations with extensive resource configurations and security plans, as it facilitates a deeper understanding of their security posture.
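For readers who want to work around the same limit programmatically rather than through the Power BI connector, the same data can be paged from the Azure Resource Graph REST API using the $skipToken it returns. The sketch below is illustrative and not taken from the blog; the query text and token acquisition are placeholder assumptions:

# Sketch: paging past ARG's per-request limit with $skipToken.
# Assumes you already hold a valid Azure AD bearer token.
import requests

ARG_URL = ("https://management.azure.com/providers/"
           "Microsoft.ResourceGraph/resources?api-version=2021-03-01")
QUERY = "Resources | project name, type"  # illustrative example query

def fetch_all_records(bearer_token):
    rows, skip_token = [], None
    while True:
        options = {"resultFormat": "objectArray"}
        if skip_token:
            options["$skipToken"] = skip_token
        resp = requests.post(
            ARG_URL,
            json={"query": QUERY, "options": options},
            headers={"Authorization": f"Bearer {bearer_token}"},
        )
        resp.raise_for_status()
        page = resp.json()
        rows.extend(page["data"])
        skip_token = page.get("$skipToken")  # present only while more pages remain
        if not skip_token:
            return rows

Each page is still capped by the service, but looping on $skipToken collects the full result set, which is essentially what the Power BI connector automates for you.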
Read the full post here: Microsoft Power BI and Defender for Cloud – Part 2: Overcoming ARG 1000-Record Limit
Import old email into Outlook
I am a newbie. I have Microsoft Professional Plus 2024 and Microsoft 365. I want to move my old emails from my eM Client program (I have five email addresses with a history of five years of emails) into my new Outlook folders, but I cannot find out how to do this. I am also trying to import my contacts (people) information into Outlook.
SQL Server Virtualization and S3 – Authentication Error
We are experimenting with data virtualization in SQL Server 2022, where we have data in S3 that we want to access from our SQL Server instances. I have completed the configuration according to the documentation, but I am getting an error when trying to access the external table. SQL Server says it cannot list the contents of the directory, and logs in AWS indicate that it cannot connect due to an authorization error where the header is malformed.
I verified that I can access that bucket with the same credentials using the AWS cli from the same machine, but I cannot figure out why it is failing or what the authorization header looks like. Any pointers on where to look?
Enable Polybase
select serverproperty('IsPolyBaseInstalled') as IsPolyBaseInstalled
exec sp_configure @configname = 'polybase enabled', @configvalue = 1
reconfigure -- required for the sp_configure change to take effect
Create Credentials and data source
create master key encryption by password = '<some password>'
go
create database scoped credential s3_dc with identity = 'S3 Access Key', SECRET = '<access key>:<secret key>' -- external data sources require a database scoped credential
go
create external data source s3_ds
with (
location = 's3://<bucket_name>/<path>/',
credential = s3_dc,
connection_options = '{
"s3": {
"url_style": "virtual_hosted"
}
}'
)
go
Create External Table
CREATE EXTERNAL FILE FORMAT ParquetFileFormat WITH(FORMAT_TYPE = PARQUET)
GO
CREATE EXTERNAL TABLE sample_table(
code varchar,
the_date date,
ref_code varchar,
value1 int,
value2 int,
value3 int,
cost numeric(12,2),
peak_value varchar
)
WITH (
LOCATION = '/sample_table/',
DATA_SOURCE = s3_ds,
FILE_FORMAT = ParquetFileFormat
)
GO
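Since the AWS CLI works with the same credentials, one way to narrow the problem down is to reproduce SQL Server's exact addressing mode outside of it. The following Python sketch is a diagnostic assumption, not part of the original post; it forces virtual-hosted-style addressing via boto3, mirroring the data source's "url_style":"virtual_hosted" option, with the placeholders kept as in the T-SQL above:

# Diagnostic sketch (assumes boto3 is installed): verify the same access key and
# secret work with virtual-hosted-style addressing outside SQL Server.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    aws_access_key_id="<access key>",
    aws_secret_access_key="<secret key>",
    region_name="<region>",
    config=Config(s3={"addressing_style": "virtual"}),
)

# Equivalent of SQL Server "listing the contents of the directory".
resp = s3.list_objects_v2(Bucket="<bucket_name>", Prefix="<path>/sample_table/")
for obj in resp.get("Contents", []):
    print(obj["Key"])

If this listing succeeds but SQL Server still fails, the mismatch is more likely in the data source definition (for example, region or URL style) than in the credentials themselves.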
Getting last email for Microsoft 365 Group via Graph
Hello,
Is there a way to get information about Last Received mail for Microsoft 365 Group using Graph?
In the past I used:
Get-ExoMailboxFolderStatistics -Identity $mailbox -IncludeOldestAndNewestItems -FolderScope Inbox
but it takes too long if there are many mailboxes.
I also tried https://graph.microsoft.com/v1.0/users/<M365Group_mailAddress>/mailFolders?`$top=1
but that didn't work, most likely because the group mailbox doesn't exist from an Exchange user's perspective.
Any ideas?
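One possible avenue, sketched below in Python and untested against a real tenant: a Microsoft 365 Group's mail surfaces in Graph as group conversations rather than user mail folders, so reading lastDeliveredDateTime from the newest conversation may approximate the last received message. Note the group must be addressed by its object ID, not its mail address, which may be why the /users/... call failed:

# Untested sketch: group mail is exposed via /groups/{id}/conversations in Graph,
# not via /users/{mail}/mailFolders. Resolve the object ID first, e.g. with
# GET /groups?$filter=mail eq '<M365Group_mailAddress>'.
import requests

GROUP_ID = "<M365Group_object_id>"  # placeholder
url = f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/conversations?$top=1"

resp = requests.get(url, headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
conversations = resp.json().get("value", [])
if conversations:
    # Conversations appear to be returned newest-first; lastDeliveredDateTime
    # marks the most recent delivery in the thread (verify for your tenant).
    print(conversations[0]["lastDeliveredDateTime"])

Since this is one request per group rather than a mailbox statistics call, it should also scale better than Get-ExoMailboxFolderStatistics across many groups, though that is worth verifying.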
New Blog | Detect compromised RDP sessions with Microsoft Defender for Endpoint
By SaarCohen
Human operators play a significant part in planning, managing, and executing cyber-attacks. During each phase of their operations, they learn and adapt by observing the victims' networks and leveraging intelligence and social engineering. One of the most common tools human operators use is Remote Desktop Protocol (RDP), which gives attackers not only control, but also Graphical User Interface (GUI) visibility on remote computers. Because RDP is such a popular tool in human-operated attacks, it allows defenders to use the RDP context as a strong signal of suspicious activity, and therefore to detect Indicators of Compromise (IOCs) and act on them.
That’s why today Microsoft Defender for Endpoint is enhancing the RDP data by adding a detailed layer of session information, so you can more easily identify potentially compromised devices in your organization. This layer provides you with more details into the RDP session within the context of the activity initiated, simplifying correlation and increasing the accuracy of threat detection and proactive hunting.
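For readers who want to hunt over RDP activity programmatically, here is a hedged Python sketch against the Microsoft 365 Defender advanced hunting API. It runs a generic RDP hunting query over DeviceNetworkEvents; it does not use the new session-specific fields the post announces, and token acquisition is omitted:

# Sketch: a generic RDP hunting query via the advanced hunting API
# (not the new RDP session fields described in the post).
import requests

API = "https://api.security.microsoft.com/api/advancedhunting/run"
KQL = """
DeviceNetworkEvents
| where RemotePort == 3389 and ActionType == "InboundConnectionAccepted"
| project Timestamp, DeviceName, RemoteIP
| take 10
"""

resp = requests.post(API, json={"Query": KQL},
                     headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
for row in resp.json().get("Results", []):
    print(row["Timestamp"], row["DeviceName"], row["RemoteIP"])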
Read the full post here: Detect compromised RDP sessions with Microsoft Defender for Endpoint