Month: June 2024
Subform updates table, but not when the form is opened individually to input data
I have an Employee form connected to the Employee table and a Contacts form connected to the Contacts table. When I drop the Contacts form into the Employee form as a subform and type a few notes, it updates the table and links the IDs together. But if I create a button just to open the Contacts form to modify that employee’s record, it updates the table without the linked ID.
Outlook Mobile App – only showing about 24 hours’ worth of email
Has anyone come across an issue where you only see about 24 hours’ worth of email in the Outlook mobile app? It’s happening on Android and some Apple devices.
ADF throwing error while connecting through SFTP
Hi there,
Partners send us files in CSV format. They upload them to FTP, and a MOVEit job moves them to an SFTP location. When ADF uses the SFTP linked service to read a file, it errors out with the error below. The file does not have any data issues.
However, if I upload the same file to Azure Blob Storage and read it using ADF’s Azure Blob Storage linked service, it is processed perfectly.
Could you please help me understand why I am getting the error only when processing the file using SFTP?
Error while using SFTP:
ErrorCode=DelimitedTextMoreColumnsThanDefined,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error found when processing 'Csv/Tsv Format Text' source 'TodaysFile_06_18_2024.csv' with row number 88186: found more columns than expected column count 25.,Source=Microsoft.DataTransfer.Common,'
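For what it’s worth, this error often surfaces when the two paths parse the file differently, for example when quote characters, escape characters, or line endings survive the blob upload but are altered in the FTP-to-SFTP transfer. A minimal local check, sketched in Python under the assumption that you can download a copy of the file (the file name and the 25-column count below are taken from the error message), is to count columns per row the way a CSV parser would:

```python
import csv

# Sketch: report rows whose parsed column count differs from the expected
# count, which is what DelimitedTextMoreColumnsThanDefined flags in ADF.
def find_bad_rows(path, expected=25, encoding="utf-8-sig"):
    bad = []
    with open(path, newline="", encoding=encoding) as f:
        for i, row in enumerate(csv.reader(f), start=1):
            if len(row) != expected:
                bad.append((i, len(row)))  # (row number, actual column count)
    return bad

# Example (hypothetical local copy of the failing file):
# find_bad_rows("TodaysFile_06_18_2024.csv")
```

Comparing the output for the SFTP-sourced and blob-sourced copies of the same file would show whether the bytes really differ, or whether the two linked services are applying different quote/escape settings to identical bytes.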
Cancelled booking
Hello – if a user cancels a booking, where do I find the name/email of the user who cancelled it?
Name change & profile picture sync takes weeks
Hello,
When we change a user’s display name or profile picture, it takes weeks for the change to fully synchronize and show the same correct information in Teams, Outlook, and SharePoint.
During the week after the change, the new information can be viewed in Teams but not in SharePoint; the day after, it can be vice versa; the day after that, the new information may not be visible anywhere; and the day after that, it may come back to being visible only in Teams. And on it goes.
But after a couple of weeks or a month or more, everything comes in order and the new display name or profile picture is correctly visible everywhere.
Why is this happening, and is there some kind of manual sync I could run to get the change to pull through at once?
Best regards,
Linus
Nonprofit CRM Donorfy builds the future of fundraising with Microsoft
Donorfy’s cloud-based nonprofit CRM platform provides fundraisers with the tools to spend less time on busywork—and focus on generating the revenue that drives impact. With Donorfy, charity startups and established giants alike save time on recurring fundraising tasks, deepen relationships with new and ongoing supporters, and increase revenue to fund their important work. “We deliver simplicity,” explains Ben Brett, CTO and co-founder of Donorfy. “Our platform takes care of the nuts and bolts, and it removes tedious manual steps, so our customers can grow.”
To expand its reach and accelerate the development of advanced features, Donorfy joined the Microsoft Tech for Social Impact (TSI) Digital Natives Partner Program. The program brings on cloud-first SaaS companies and independent software vendors (ISVs) to serve nonprofits through innovative technology solutions in line with Microsoft technology offerings.
“The Digital Natives partnership helps Donorfy, but ultimately it helps our customers do more,” says Ben Twyman, Chief Commercial Officer at Donorfy. “Working with Microsoft—probably the most advanced company in the world working on AI and related specialties—means we can bring that expertise to our customer base, and Digital Natives helps us shout out how we’re helping customers reach their mission faster.”
“We are proud to partner with Donorfy to help charities in the UK and beyond achieve their mission,” says Craig Parker, Global SaaS Partnerships Lead for the Digital Natives Partner Program at Microsoft Tech for Social Impact. “Donorfy, as a leading fundraising CRM platform, embraces innovation and shares our vision of how AI can accelerate social good.”
As a cloud-native company, Donorfy has always been built in Microsoft Azure. “By using a single supplier—Microsoft—we have an end-to-end chain that enables us to build a great solution,” Brett explains. “The scalability in Azure saves a massive amount of time and allows us to get on with our jobs of supporting charities.”
The ongoing relationship with Microsoft made joining Digital Natives a clear next step. Digital Natives supports nonprofit-focused businesses like Donorfy to reach more customers and develop new solutions that empower mission-driven organizations.
“There’s a good synergy between Microsoft and Donorfy, in that we’re both making technology simple, accessible, and smart enough to make the world better and fairer,” Twyman says. “With this partnership, both companies win and achieve.”
Read the full case study
Microsoft Entra ID Governance licensing clarifications
In the past few weeks, we’ve announced the general availability of Microsoft Entra External ID and Microsoft Entra ID multi-tenant collaboration. We’ve received requests for more detail from some of you regarding licensing, so I’d like to provide additional clarity for both of these scenarios.
One person, one license
In the first announcement of more multi-tenant organization (MTO) features to enhance collaboration between users, we stated that only one Microsoft Entra ID P1 license is required per employee per multi-tenant organization. Expanding on that, the term “multi-tenant organization” has two meanings: an organization that owns and operates more than one tenant, and a set of features that enhance the collaboration experience for users between these tenants. However, your organization doesn’t have to deploy those capabilities to take advantage of the one person, one license philosophy. An organization that owns and operates multiple tenants only needs one Entra ID license per employee across those tenants. The same philosophy applies to Entra ID Governance: the organization only needs one license per person to govern the identities of these users across these tenants.
To illustrate this scenario, let’s consider an organization called Contoso, which owns ZT Tires and Tailspin Toys. Mallory is hired by Contoso, which uses Lifecycle Workflows in Entra ID Governance to onboard her user account and grant her access to the resources she needs for her job. Her account receives an access package with an entitlement to ZT Tires’ ERP app, and she requests access to Tailspin Toys inventory management app. Because Mallory has an Entra ID Governance license in the Contoso tenant, her identity can be governed in the ZT Tires and Tailspin Toys tenants with no additional governance licenses – one person, one license.
Entra ID Governance in Microsoft Entra External ID
The other announcement covered Entra External ID, Microsoft’s solution to secure customer and business collaborator access to applications. In November, I blogged about the licensing model to govern the identities of business guests in the B2B scenario for Entra External ID and shared that pricing would be $0.75 per actively governed identity per month. Because metered, usage-based pricing to govern the identities of business guests is a different model than the existing, license-based pricing model to govern the identities of employees, I’d like to share more detail.
A business guest identity in Entra External ID will accrue a single $0.75 charge in any month in which that identity is actively governed, no matter how many governance actions are taken on that identity. For example:
A Contoso employee named Gerhart collaborates with Pradeep of Woodgrove Bank to produce Contoso’s quarterly financial statements. Contoso has deployed Entra External ID for its business partners such as Woodgrove Bank. In April, Pradeep accesses Contoso’s Microsoft Teams where Gerhart stores his quarterly reporting documents, but his Entra External ID identity has no identity governance actions taken on it, so it doesn’t accrue any charges.
In May, Pradeep receives an access package with an entitlement to Contoso’s accounting system, and Gerhart reviews Pradeep’s existing access to Contoso’s inventory management database, as well as to the Teams with the quarterly reporting documents. Because Pradeep’s identity in Entra External ID had identity governance actions taken on it, Contoso will accrue a $0.75 charge. Note that the charge is applied once, even though there were three identity governance actions taken during the month. Once that Entra External ID identity was governed in May, additional identity governance actions do not generate additional charges for that identity in May.
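The billing rule above can be sketched as a small calculation: one flat charge per identity per calendar month in which that identity had at least one governance action, regardless of how many actions were taken. (A hypothetical illustration; the rate is the $0.75 figure quoted above.)

```python
RATE_PER_GOVERNED_IDENTITY = 0.75  # USD per actively governed identity per month

def monthly_charge(actions_by_identity):
    """Charge one flat rate for each identity with >= 1 governance action this month."""
    governed = sum(1 for count in actions_by_identity.values() if count > 0)
    return governed * RATE_PER_GOVERNED_IDENTITY

# April: Pradeep accesses Teams but no governance actions are taken -> $0.00
print(monthly_charge({"pradeep": 0}))   # 0.0
# May: one access package plus two access reviews -> still one charge -> $0.75
print(monthly_charge({"pradeep": 3}))   # 0.75
```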
To learn more about Microsoft Entra ID Governance licensing, visit the Licensing Fundamentals page.
Read more on this topic
Entra ID multi-tenant collaboration
Microsoft Entra External ID general availability
Learn more about Microsoft Entra
Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community
Can we change the parameter values of the Simulink blocks in the HDL toolbox?
Hello, I am trying to create a toolchain from Simulink to an FPGA platform, and I am using HDL Simulink blocks to design an OFDM system. Can I customize the parameter values of the HDL toolbox blocks? I haven’t been able to find any source code for the HDL toolbox blocks. #simulink #ofdm #hdltoolbox MATLAB Answers — New Questions
Convert vector of numeric values into a vector of equivalent 8-bit ‘int8’ binary values
Hi,
I am trying to convert a 1×4 numeric vector into the equivalent 8-bit binary ‘int8’ values, also as a 1×4 vector; however, I am getting a 1×8 vector which appears to be the binary value of 8, the second element.
So I want to convert each numeric element to its 8-bit ‘int8’ binary equivalent and store it in the same element of another vector.
Assuming I have not messed up my two’s complement, I am expecting the row vector to be displayed as:
[10011110 01111000 01001111 00001000]
Any help would be appreciated.
x = [-98 8 49 120];
y = zeros(1,length(x));
for idx = 1:length(x)
y = bitget(x(idx),8:-1:1,'int8');
end
disp(y)
int8, bitget, binary MATLAB Answers — New Questions
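A side note on the loop above: the assignment inside it overwrites `y` with the whole 1×8 `bitget` result on every pass, so after the loop only the last element’s bits remain. Storing one bit string per element avoids that. The same two’s-complement extraction, sketched in Python purely for illustration:

```python
def int8_bits(n):
    """8-bit two's-complement bit string of n, for n in -128..127."""
    return format(n & 0xFF, "08b")  # mask to 8 bits, zero-padded binary

x = [-98, 8, 49, 120]
y = [int8_bits(v) for v in x]  # one bit string per element, not one long vector
print(y)  # ['10011110', '00001000', '00110001', '01111000']
```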
How can people who are bad at math and have no programming aptitude learn MATLAB? (Long question)
Dear MATLAB community,
How can I help my close friend who’s bad at math and programming learn MATLAB?
He’s a final-year chemical engineering student who struggles even to plot two functions on the same graph in his computational fluid dynamics class (there was no prerequisite for MATLAB skills).
In his first year, I saw him get dragged through the introductory engineering classes, which were his first encounter with MATLAB. Students were taught a few rudimentary programming skills and then were expected to write code for a ‘simple’ tic-tac-toe game. It took him hours of blank looks and tutoring to even understand the simplest of Boolean operators. He was never able to write a working function without the supervision of a friend or tutor. Needless to say, he was permanently scarred by the experience and swore to avoid using it forever.
After 3 years of avoiding MATLAB, he realised how not knowing it hurt him during his final-year project. He had to solve a system of PDEs to model the performance of a reactor, and practically speaking, MATLAB was the most suitable software at hand. He ended up having to get a friend to help him code the equations in while also having to oversimplify his model.
The weird thing is that: most students from his chemical engineering faculty were not expected or encouraged to use MATLAB, almost all of their prior assignments required no use of MATLAB except that infamous first year course, and most of his peers also avoided using MATLAB and resorted to Excel. It is my understanding that Excel cannot match MATLAB’s efficiency and clarity when solving calculus problems so it was not uncommon to see extremely long Excel spreadsheets.
Anyway, my friend is, with the help of a friend’s past year MATLAB codes, trying to finish up his computational fluid dynamics assignment that’s due soon. He finishes university in 2 weeks time.
Even though he knows that not every engineer has to use MATLAB in the workplace, he somehow wishes he was able to learn MATLAB at his glacial pace. I find it such a pity that he was never able to keep up with the pace of learning that was expected, which begs the question: are students who are too slow at learning programming better off in a different field of study?
If you’ve managed to read to the end of this, thank you so much. I just don’t know how to help my friend and I’m hoping some of you might be able to suggest how I can help him be better at it. I believe he has potential but needs special help when it comes to MATLAB.
All helpful and constructive suggestions considered,
Thank You All
#beginner MATLAB Answers — New Questions
I need to apply a $25 credit to our Microsoft 365 subscription
Our subscription to Microsoft 365 will automatically renew in July. The subscription is under my wife’s account (we share the app). I have a $25 credit in my Microsoft account. How can I apply this to the subscription charge? Thank you.
How would Azure create IoT systems for FM with a focus on sustainability?
How would Azure develop IT solutions for FM (facilities management) that are transparent about their impact on sustainability? How could we demonstrate the efficiency savings and sustainability impact to clients?
Is there a place to post .NET jobs?
My company is looking to hire a strong developer, and I was wondering whether there is a place within this community where I could post the job.
Subform updating table, but not when the form is opened individually
I have an Employee form connected to the Employee table and a Contacts form connected to the Contacts table. When I drop the Contacts form into the Employee form as a subform and type a few notes, it updates the table and links the IDs together. But if I create a button just to open the Contacts form to modify that employee’s record, it updates the table without the linked ID.
Octo Tempest: Hybrid identity compromise recovery
Have you ever gone toe to toe with the threat actor known as Octo Tempest? This increasingly aggressive threat actor group has evolved their targeting, outcomes, and monetization over the past two years to become a dominant force in the world of cybercrime. But what exactly defines this entity, and why should we proceed with caution when encountering them?
Octo Tempest (formerly DEV-0875) is a group known for employing social engineering, intimidation, and other human-centric tactics to gain initial access into an environment, granting themselves privilege to cloud and on-premises resources before exfiltrating data and unleashing ransomware across an environment. Their ability to penetrate and move around identity systems with relative ease encapsulates the essence of Octo Tempest and is the purpose of this blog post. Their activities have been closely associated with:
SIM swapping scams: Seize control of a victim’s phone number to circumvent multifactor authentication.
Identity compromise: Initiate password spray attacks or phishing campaigns to gain initial access and create federated backdoors to ensure persistence.
Data breaches: Infiltrate the networks of organizations to exfiltrate confidential data.
Ransomware attacks: Encrypt a victim’s data and demand primary, secondary, or tertiary ransom fees to refrain from disclosing any information or to release the decryption key to enable recovery.
Figure 1: The evolution of Octo Tempest’s targeting, actions, outcomes, and monetization.
Some key considerations to keep in mind for Octo Tempest are:
Language fluency: Octo Tempest purportedly operates predominantly in native English, heightening the risk for unsuspecting targets.
Dynamic: Known to pivot quickly and change tactics depending on the target organization’s response.
Broad attack scope: They target diverse businesses ranging from telecommunications to technology enterprises.
Collaborative ventures: Octo Tempest may forge alliances with other cybercrime cohorts, such as ransomware syndicates, amplifying the impact of their assaults.
As our adversaries adapt their tactics to match the changing defense landscape, it’s essential for us to continually define and refine our response strategies. This requires us to promptly utilize forensic evidence and efficiently establish administrative control over our identity and access management services. In pursuit of this goal, Microsoft Incident Response has developed a response playbook that has proven effective in real-world situations. Below, we present this playbook to empower you to tackle the challenges posed by Octo Tempest, ensuring the smooth restoration of critical business services such as Microsoft Entra ID and Active Directory Domain Services.
Cloud eviction
We begin with the cloud eviction process. If any actor takes control of the identity plane in Microsoft Entra ID, a set of steps should be followed to hit reset and take back administrative control of the environment. Here are some tactical measures employed by the Microsoft Incident Response team to ensure the security of the cloud identity plane:
Figure 2: Cloud response playbook.
Break glass accounts
Emergency scenarios require emergency access. For this purpose, one or two administrative accounts should be established. These accounts should be exempted from Conditional Access policies to ensure access in critical situations, monitored to verify they are not used outside emergencies, and their passwords should be stored securely offline whenever feasible.
More information on emergency access accounts can be found here: Manage emergency access admin accounts – Microsoft Entra ID | Microsoft Learn.
Federation
Octo Tempest leverages cloud-born federation features to take control of a victim’s environment, allowing for the impersonation of any user inside the environment, even if multifactor authentication (MFA) is enabled. While this is a damaging technique, it is relatively simple to mitigate by logging in via the Microsoft Graph PowerShell module and setting the domain back from Federated to Managed. Doing so breaks the relationship and prevents the threat actor from minting further tokens.
Connect to your Azure/Office 365 tenant by running the following PowerShell cmdlet and entering your Global Administrator credentials:
Connect-MgGraph -Scopes "Domain.ReadWrite.All"
Change the authentication type from Federated to Managed by running this cmdlet:
Update-MgDomain -DomainId “test.contoso.com” -BodyParameter @{AuthenticationType=”Managed”}
Service principals
Service principals have their own identities, credentials, roles, and permissions, and can be used to access resources or perform actions on behalf of the applications or services they represent. These have been used by Octo Tempest for persistence in compromised environments. Microsoft Incident Response recommends reviewing all service principals and removing or reducing permissions as needed.
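Such a review can start from the Microsoft Graph PowerShell module the playbook already uses. The following is a hedged sketch rather than a complete audit: it surfaces service principals that carry their own key or password credentials, which are worth manual review (it assumes you can connect with Application.Read.All consent):

```powershell
# Sketch: list service principals holding their own credentials --
# a common persistence foothold. Review each hit manually.
Connect-MgGraph -Scopes "Application.Read.All"
Get-MgServicePrincipal -All |
    Where-Object { $_.KeyCredentials.Count -gt 0 -or $_.PasswordCredentials.Count -gt 0 } |
    Select-Object DisplayName, AppId,
        @{ n = 'CredentialCount'; e = { $_.KeyCredentials.Count + $_.PasswordCredentials.Count } }
```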
Conditional Access policies
These policies govern how an application or identity can access Microsoft Entra ID or your organization resources and configuring these appropriately ensures that only authorized users are accessing company data and services. Microsoft provides template policies that are simple to implement. Microsoft Incident Response recommends using the following set of policies to secure any environment.
Note: Any administrative account used to make a policy will be automatically excluded from it. These accounts should be removed from exclusions and replaced with a break glass account.
Figure 3: Conditional Access policy templates.
Conditional Access policy: Require multifactor authentication for all users
This policy is used to enhance the security of an organization’s data and applications by ensuring that only authorized users can access them. Octo Tempest is often seen performing SIM swapping and social engineering attacks, and MFA is now more of a speed bump than a roadblock to many threat actors. This step is essential.
Conditional Access policy: Require phishing-resistant multifactor authentication for administrators
This policy is used to safeguard access to portals and admin accounts. It is recommended to use a modern phishing-resistant MFA type which requires an interaction between the authentication method and the sign-in surface such as a passkey, Windows Hello for Business, or certificate-based authentication.
Note: Exclude the Entra ID Sync account. This account is essential for the synchronization process to function properly.
Conditional Access policy: Block legacy authentication
Implementing a Conditional Access policy to block legacy access prohibits users from signing in to Microsoft Entra ID using vulnerable protocols. Keep in mind that this could block valid connections to your environment. To avoid disruption, follow the steps in this guide.
Conditional Access policy: Require password change for high-risk users
By implementing a user risk Conditional Access policy, administrators can tailor access permissions or security protocols based on the assessed risk level of each user. Read more about user risk here.
Conditional Access policy: Require multifactor authentication for risky sign-ins
This policy can be used to block or challenge suspicious sign-ins and prevent unauthorized access to resources.
Segregate Cloud admin accounts
Administrative accounts should always be segregated to ensure proper isolation of privileged credentials. This is particularly true for cloud admin accounts to prevent the vertical movement of privileged identities between on-premises Active Directory and Microsoft Entra ID.
In addition to the enforced controls provided by Microsoft Entra ID for privileged accounts, organizations should establish process controls to restrict password resets and manipulation of MFA mechanisms to only authorized individuals.
During a tactical takeback, it’s essential to revoke permissions from old admin accounts, create entirely new accounts, and ensure that the new accounts are secured with modern MFA methods, such as device-bound passkeys managed in the Microsoft Authenticator app.
Review Azure resources
Octo Tempest has a history of manipulating resources such as Network Security Groups (NSGs), Azure Firewall, and granting themselves privileged roles within Azure Management Groups and Subscriptions using the ‘Elevate Access’ option in Microsoft Entra ID.
It’s imperative to conduct regular, thorough reviews of these services, carefully evaluating all changes, in order to effectively remove Octo Tempest from a cloud environment.
Of particular importance are the Azure SQL Server local admin accounts and the corresponding firewall rules. These areas warrant special attention to mitigate any potential risks posed by Octo Tempest.
Intune Multi-Administrator Approval (MAA)
Intune access policies can be used to implement two-person control of key changes to prevent a compromised admin account from maliciously using Intune, causing additional damage to the environment while mitigation is in progress.
Access policies are supported by the following resources:
Apps – Applies to app deployments but doesn’t apply to app protection policies.
Scripts – Applies to deployment of scripts to devices that run Windows.
Octo Tempest has been known to leverage Intune to deploy ransomware at scale. This risk can be mitigated by enabling the MAA functionality.
Review of MFA registrations
Octo Tempest has a history of registering MFA devices on behalf of standard users and administrators, enabling account persistence. As a precautionary measure, review all MFA registrations during the suspected compromise window and prepare for the potential re-registration of affected users.
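If your Entra audit logs are exported to Log Analytics, registrations during the suspected compromise window can be enumerated with a query along these lines (a sketch: the date range is a placeholder, and the operation name assumes the standard audit log schema):

```kusto
AuditLogs
| where TimeGenerated between (datetime(2024-05-01) .. datetime(2024-06-18)) // placeholder window
| where OperationName == "User registered security info"
| project TimeGenerated, OperationName, InitiatedBy, TargetResources
| order by TimeGenerated desc
```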
On-premises eviction
Additional containment efforts include the on-premises identity systems. There are tried and tested procedures for rebuilding and recovering on-premises Active Directory, post-ransomware, and these same techniques apply to an Octo Tempest intrusion.
Figure 5: On-premises recovery playbook.
Active Directory Forest Recovery
If a threat actor has taken administrative control of an Active Directory environment, complete compromise of all identities in Active Directory, and their credentials, should be assumed. In this scenario, on-premises recovery follows this Microsoft Learn article on full forest recovery:
Active Directory Forest Recovery – Procedures | Microsoft Learn
If there are good backups of at least one Domain Controller for each domain in the compromised forest, these should be restored. If this option is not available, there are other methods to isolate Domain Controllers for recovery. This can be accomplished with snapshots or by moving one good Domain Controller from each domain into an isolated network so that Active Directory sanitization can begin in a protective bubble.
Once this has been achieved, domain recovery can begin. The steps are identical for every domain in the forest:
Metadata cleanup of all other Domain Controllers
Seizing the Flexible Single Master Operations (FSMO) roles
Raising the available RID pool and invalidating the current RID pool
Resetting the Domain Controller computer account password
Resetting the password of KRBTGT twice
Resetting the built-in Administrator password twice
If Read-Only Domain Controllers existed, removing their instance of krbtgt_xxxxx
Resetting inter-domain trust account (ITA) passwords on each side of the parent/child trust
Removing external trusts
Performing an authoritative restore of the SYSVOL content
Cleaning up DNS records for the Domain Controllers removed during metadata cleanup
Resetting the Directory Services Restore Mode (DSRM) password
Removing the Global Catalog role and then re-promoting to Global Catalog
When these actions have been completed, new Domain Controllers can be built in the isolated environment. Once replication is healthy, the original systems restored from backup can be demoted.
Octo Tempest is known for targeting Key Vaults and Secret Servers. Special attention will need to be paid to these secrets to determine if they were accessed and, if so, to sanitize the credentials contained within.
Tiering model
Restricting privilege escalation is critical to containing any attack since it limits the scope and damage. Identity systems in control of privileged access, and the critical systems that identity administrators log on to, are both within the scope of protection.
Microsoft’s official documentation guides customers towards implementing the enterprise access model (EAM) that supersedes the “legacy AD tier model.” The EAM serves as an all-encompassing means of addressing where and how privileged access is used. It includes controls for cloud administration, and even network policy controls to protect legacy systems that lack accounts entirely.
However, the EAM has several limitations. First, it can take months, or even years, for an organization’s architects to map out and implement. Secondly, it spans disjointed controls and operating systems. Lastly, not all of it is relevant to the immediate concern of mitigating Pass-the-Hash (PtH) as outlined here.
Our customers with on-premises systems are often looking to implement PtH mitigations yesterday. The AD tiering model is a good starting point for domain-joined services to satisfy this requirement. It is:
Easier to conceptualize
Backed by practical implementation guidance
Amenable to a partially automated rollout
The EAM is still a valuable strategy to work towards in an organization’s journey to security; but this is a better goal for after the fires and smoldering embers have been extinguished.
Figure 6: Securing privileged access Enterprise access model – Privileged access | Microsoft Learn.
Segregated privileged accounts
Accounts should be created for each tier of access, and processes should be put in place to ensure that these remain correctly isolated within their tiers.
Control plane isolation
Identify all systems that fall under the control plane. The key rule to follow is that anything that accesses or can manipulate an asset must be treated at the same level as the assets that they manipulate. At this stage of eviction, the control plane is the key focus area. As an example, SCCM being used to patch Domain Controllers must be treated as a control plane asset.
Backup accounts are particularly sensitive targets and must be managed appropriately.
Account disposition
The next phase of on-premises recovery and containment consists of a procedure known as account disposition in which all privileged or sensitive groups are emptied except for the account that is performing the actions. These groups include, but are not limited to:
Built-In Administrators
Domain Admins
Enterprise Admins
Schema Admins
Account Operators
Server Operators
DNS Admins
Group Policy Creator Owners
Any identity that gets removed from these groups goes through the following steps:
Password is reset twice
Account is disabled
Account is marked with Smart card is required for interactive logon
Access control lists (ACLs) are reset to the default values and the adminCount attribute is cleared
Once this is done, build new accounts as per the tiering model. Create new Tier 0 identities for only the few staff that require this level of access, with a complex password and marked with the Account is sensitive and cannot be delegated flag.
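For domain accounts, the disposition steps above can be sketched with the ActiveDirectory RSAT module. This is a hedged outline for a single account, not production code; $sam is a hypothetical placeholder:

```powershell
$sam = "olddomainadmin"  # hypothetical account removed from a privileged group
# Reset the password twice so the previous hash is pushed out of history
1..2 | ForEach-Object {
    $random = -join ((48..57) + (65..90) + (97..122) | Get-Random -Count 24 | ForEach-Object { [char]$_ })
    Set-ADAccountPassword -Identity $sam -Reset -NewPassword (ConvertTo-SecureString $random -AsPlainText -Force)
}
Disable-ADAccount -Identity $sam
Set-ADUser -Identity $sam -SmartcardLogonRequired $true
# Clear adminCount so the stale AdminSDHolder protection no longer applies
Set-ADUser -Identity $sam -Clear adminCount
```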
Access Control List (ACL) review
Microsoft Incident Response has found a plethora of overly-permissive access control entries (ACEs) within critical areas of Active Directory of many environments. These ACEs may be at the root of the domain, on AdminSDHolder, or on Organizational Units that hold critical services. A review of all the ACEs in the access control lists (ACLs) of these sensitive areas within Active Directory is performed, and unnecessary permissions are removed.
Mass password reset
In the event of a domain compromise, a mass password reset will need to be conducted to ensure that Octo Tempest does not have access to valid credentials. The method by which a mass password reset occurs will vary based on the needs of the organization and acceptable administrative overhead. If we simply write a script that gets all user accounts (other than the person executing the code) and resets each password twice to a random value, no one will know their own password and will therefore open tickets with the helpdesk. This could lead to a very busy day for those members of the helpdesk (who also don’t know their own passwords).
Some examples of mass password reset methods that we have seen in the field include, but are not limited to:
All at once: Get every single user (other than the newly created tier 0 accounts) and reset the password twice to a random password. Have enough helpdesk staff to be able to handle the administrative burden.
Phased reset by OU, geographic location, department, etc.: This method targets a community of individuals in a more phased approach, which is less of an initial hit to the helpdesk.
Service account password resets first, humans second: Some organizations start with the service account passwords first and then move to the human user accounts in the next phase.
Whichever method you choose for your mass password reset, ensure that you have an attestation mechanism in place to accurately confirm that the person calling the helpdesk to get their new password (or to enable Self-Service Password Reset) is who they say they are. An example of attestation would be a video conference call between the end user and the helpdesk, with the user showing some form of identification (for instance, a work badge) on screen.
It is recommended to also deploy and leverage Microsoft Entra ID Password Protection to prevent users from choosing weak or insecure passwords during this event.
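The bookkeeping behind a phased reset is straightforward to prototype. Below is a minimal Python sketch, not tied to any directory API: it draws two random passwords per account (mirroring the reset-twice guidance) and groups accounts by OU so each phase hits the helpdesk separately. The account and OU names are hypothetical.

```python
import secrets
import string
from collections import defaultdict

def random_password(length=24):
    """Generate a random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def plan_phased_reset(users):
    """Group users by OU so each phase hits the helpdesk separately.

    `users` is a list of (sam_account_name, ou) tuples; for every account
    two fresh random passwords are drawn, mirroring the reset-twice guidance.
    """
    phases = defaultdict(list)
    for sam, ou in users:
        # First value flushes the old hash out of history; the second is the
        # interim secret held until the user attests their identity.
        phases[ou].append((sam, random_password(), random_password()))
    return dict(phases)

plan = plan_phased_reset([
    ("jdoe", "OU=Finance"),
    ("asmith", "OU=Finance"),
    ("svc-sql", "OU=ServiceAccounts"),
])
```

In a real rollout the second password would be handed over only after attestation, as described above.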
Conclusion
The battle against Octo Tempest underscores the importance of a multi-faceted and proactive approach to cybersecurity. By understanding a threat actor’s tactics, techniques, and procedures, and by implementing the outlined incident response strategies, organizations can safeguard their identity infrastructure against this adversary and ensure all traces of persistence are eliminated. Incident response is a continuous process of learning, adapting, and securing environments against ever-evolving threats.
Microsoft Tech Community – Latest Blogs –Read More
Announcing the General Availability of Change Actor
Change Analysis
Identifying who made a change to your Azure resources and how the change was made just became easier! With Change Analysis, you can now see who initiated the change and with which client that change was made, for changes across all your tenants and subscriptions.
Audit, troubleshoot, and govern at scale
Changes should be available in under five minutes and are queryable for fourteen days. In addition, this support includes the ability to craft charts and pin results to Azure dashboards based on specific change queries.
What’s new: Actor Functionality
Who made the change
This can be either an AppId (a client or an Azure service) or the email ID of the user
changedBy: elizabeth@contoso.com
With which client the change was made
clientType: portal
What operation was called
Azure resource provider operations | Microsoft Learn
Try it out
You can try it out by querying the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph.
Sample Queries
Here is documentation on how to query resourcechanges and resourcecontainerchanges in Azure Resource Graph. Get resource changes – Azure Resource Graph | Microsoft Learn
The following queries all show changes made within the last 7 days.
Summarization of who and which client were used to make resource changes in the last 7 days ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, changedByType, clientType
| summarize count() by changedBy, changeType, clientType
| order by count_ desc
Summarization of who and what operations were used to make resource changes ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation = tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, operation
| summarize count() by changedBy, operation
| order by count_ desc
List resource container (resource group, subscription, and management group) changes: who made the change, what client was used, and which operation was called, ordered by the time of the change
resourcecontainerchanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation=tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeTime, changeType, changedBy, changedByType, clientType, operation, targetResourceId
| order by changeTime desc
FAQ
How do I use Change Analysis?
Change Analysis can be used by querying the resourcechanges or resourcecontainerchanges tables in Azure Resource Graph, such as with Azure Resource Graph Explorer in the Azure Portal or through the Azure Resource Graph APIs.
More information can be found here: Get resource changes – Azure Resource Graph | Microsoft Learn.
What does unknown mean?
Unknown is displayed when the change happened on a client that is unrecognized. Clients are recognized based on the user agent and client application id associated with the original change request.
What does System mean?
System is displayed as a changedBy value when a background change occurred that wasn’t correlated with any direct user action.
What resources are included?
Changes to Azure resources appear in the “resourcechanges” table, and changes to resource containers (resource groups, subscriptions, and management groups) appear in the “resourcecontainerchanges” table in Azure Resource Graph.
Questions and Feedback
If you have any other questions or input, you can reach out to the team at argchange@microsoft.com
Share Product feedback and ideas with us at Azure Governance · Community
For more information about Change Analysis Get resource changes – Azure Resource Graph | Microsoft Learn
Breaking the Speed Limit with WEKA: The World’s Fastest File System on top of Azure Hot Blob
Abstract
Azure Blob Storage is engineered to manage immense volumes of unstructured data efficiently. While utilizing Blob Storage for High-Performance Computing (HPC) tasks presents numerous benefits, including scalability and cost-effectiveness, it also introduces specific challenges. Key among these challenges are data access latency and the potential for performance decline in workloads, particularly noticeable in compute-intensive or real-time applications when accessing data stored in Blob. In this article, we will examine how WEKA’s patented filesystem, WekaFS™, and its parallel processing algorithms accelerate Blob storage performance.
About WEKA
The WEKA® Data Platform was purpose-built to seamlessly and sustainably deliver speed, simplicity, and scale that meets the needs of modern enterprises and research organizations without compromise. Its advanced, software-defined architecture supports next-generation workloads in virtually any location with cloud simplicity and on-premises performance.
At the heart of the WEKA® Data Platform is a modern, fully distributed parallel file system, WekaFS™, which can span thousands of NVMe SSDs spread across multiple hosts and seamlessly extend itself over compatible object storage.
WEKA in Azure
Many organizations are leveraging Microsoft Azure to run their High-Performance Computing (HPC) applications at scale. As cloud infrastructure becomes integral, users expect the same performance as on-premises deployments. WEKA delivers unbeatable performance for your most demanding applications running in Microsoft Azure supporting high I/O, low latency, small files, and mixed workloads with zero tuning and automatic storage rebalancing.
WEKA software is deployed on a cluster of Microsoft Azure LSv3 VMs with local NVMe SSD to create a high-performance storage layer. WEKA can also take advantage of Azure Blob Storage to scale your namespace at the lowest cost. You can automate your WEKA deployment through HashiCorp Terraform templates for fast easy installation. Data stored with your WEKA environment is accessible to applications in your environment through multiple protocols, including NFS, SMB, POSIX, and S3-compliant applications.
Kent has written an excellent article on WEKA’s SMB performance for HPC Windows Grid Integration. For more, please see:
WEKA Architecture
WEKA is a fully distributed, parallel file system written entirely from the ground up to deliver the highest-performance file services, designed for NVMe SSDs. Unlike traditional parallel file systems, which require extensive file system knowledge to deploy and manage, WEKA’s zero-tuning approach to storage allows for easy management from tens of terabytes to hundreds of petabytes in scale.
WEKA’s unique architecture in Microsoft Azure, as shown in Figure 1, provides parallel file access via POSIX, NFS, SMB and AKS. It provides a rich enterprise feature set, including but not limited to local and remote snapshots, snap clones, automatic data tiering, dynamic cluster rebalancing, backup, encryption, and quotas (advisory, soft, and hard).
Figure 1 – WekaFS combines NVMe flash with cloud object storage in a single global namespace
Key components to WEKA Data Platform in Azure include:
The infrastructure is deployed directly into a customer’s subscription of choice
WEKA software is deployed across 6 or more Azure LSv3 VMs. The LSv3 VMs are clustered to act as one single device.
The WekaFS™ namespace is extended onto Azure Hot Blob
WekaFS Scale Up and Scale down functions are driven by Azure Logic Apps and Function Apps
All client secrets are kept in Azure Key Vault
Deployment is fully automated using Terraform WEKA Templates
WEKA and Data Tiering
WEKA’s tiering capabilities in Azure integrate seamlessly with Azure Blob Storage. This integration leverages WEKA’s distributed parallel file system, WekaFS™, to extend from local NVMe SSDs on LSv3 VMs (performance tier) to lower-cost Azure Blob Storage (capacity tier). WEKA writes incoming data in 4K blocks (commonly referred to as chunks) aligned to the NVMe SSD block size, packages them into 1MB extents, and distributes the writes across multiple storage nodes in the cluster (in Azure, a storage node is represented as an LSv3 VM). WEKA then packages the 1MB extents into 64MB objects. Each object can contain data blocks from multiple files. Files smaller than 1 MB are consolidated into a single 64 MB object. For larger files, their parts are distributed across multiple objects.
Figure 2 – WekaFS Tiering to HOT BLOB
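To make the packing concrete, here is a small Python sketch of the arithmetic only. The constants come from the paragraph above; this is an illustration, not WEKA’s actual packing logic, and it assumes a file that does not share objects with other files:

```python
# Illustrative arithmetic for WEKA's write path as described above:
# 4 KiB blocks -> 1 MiB extents -> 64 MiB objects tiered to Blob.
BLOCK = 4 * 1024            # 4 KiB write block, aligned to NVMe block size
EXTENT = 1024 * 1024        # 1 MiB extent
OBJECT = 64 * 1024 * 1024   # 64 MiB object pushed to Blob

def packing(file_bytes):
    """Return (blocks, extents, objects) a file of `file_bytes` occupies,
    as an upper bound assuming it shares no objects with other files."""
    blocks = -(-file_bytes // BLOCK)     # ceiling division
    extents = -(-file_bytes // EXTENT)
    objects = -(-extents * EXTENT // OBJECT)
    return blocks, extents, objects
```

For example, a 500 MiB file packs into 500 extents spread across 8 objects, while a file under 1 MiB fits in a single extent.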
How do you retrieve data that is cold? What are the options?
Tiered data is always accessible and is treated as if it was part of the primary file system. Moreover, while data may be tiered, the metadata is always maintained on the SSDs. This allows traversing files and directories without impacting performance.
Consider a scenario where an HPC job has run and outputs are written to WekaFS. In time the outputs file data will be tiered to Azure Blob (capacity tier) to free up the WekaFS (performance tier) to run new jobs. At some later date the data is required again for processing. What are the options?
Cache Tier: When file data is tiered to Blob, the file metadata always remains locally on the flash tier, so all files are available to the applications. WEKA maintains the cache tier (stored in NVMe SSD) within its distributed file system architecture. When file data is rehydrated from Azure Blob Storage, WEKA stores the data in “read cache” for improved subsequent read performance.
Pre-Fetch: WEKA provides a pre-fetch API to instruct the WEKA system to fetch all of the data back from Blob (capacity tier) to NVMe (performance tier). For further details please refer to this link: https://docs.Weka.io/fs/tiering/pre-fetching-from-object-store
Cold read the data directly from Blob: the client still accesses the data through the WEKA mount, but the data is not cached by WekaFS and is sent directly to the client
It is the third option that had me intrigued. WEKA claims to parallelize reads, so would it be possible to read directly from Blob at a “WEKA Accelerated Rate”?
Testing Methodology
The testing infrastructure consisted of:
6 x Standard_D64_v5 Azure VMs used for clients
20 x L8s_v3 VM instances used for the NVMe WEKA layer
Hot Zone Redundant Storage (ZRS) enabled Blob
For the test, a 2 TB file system was used on the NVMe layer (for metadata) and 20 TB was configured on the Hot Blob layer.
Figure 3 – WekaFS testing Design.
A 20 TB Filesystem was created on WEKA:
Figure 4 – Sizing the WekaFS
We chose an object store direct mount (note the obs_direct option).
pdsh mount -t wekafs -o net=eth1,obs_direct [weka backend IP]/archive /mnt/archive
To simulate load, we used fio to write random data to the object store at a 1M block size.
pdsh 'fio --name=$HOSTNAME-fio --directory=/mnt/archive --numjobs=200 --size=500M --direct=1 --verify=0 --iodepth=1 --rw=write --bs=1M'
Once the write workload completes, notice that only 2.46 GB of data resides on the SSD tier (this is all metadata), and 631.6 GB resides on BLOB storage.
Figure 5 – SSD Tier used for Metadata only
Double-checking the file system using the weka fs command: the used SSD capacity remains at 2.46 GB, which is the size of our metadata.
Figure 6 – SSD Tier used for Metadata only.
Now that all the data resides on Blob, let’s measure how quickly it can be accessed.
We’ll benchmark our performance with FIO. We’ll run load testing across all six of our clients. Each client will be reading in 1MB block sizes.
pdsh 'fio --name=$HOSTNAME-fio --directory=/mnt/archive --numjobs=200 --size=500M --direct=1 --verify=0 --iodepth=1 --rw=read --bs=1M --time_based --runtime=90'
The command is configured to run for 90 seconds so we can capture the sustained bandwidth from the hot blob tier of the WEKA data platform.
From the screenshot below (Figure 7), observe that we are reading data from Azure Blob at speeds up to 20 GB/s.
Figure 7 – 19.63 GB/s 100% reads coming directly from BLOB
How does WEKA do it?
Simple answer: even load distribution across all nodes in the cluster. Each WEKA compute process establishes 64 threads to run GET operations against the Blob container. Each WEKA backend is responsible for an equal portion of the namespace, and each performs the appropriate API operations against Azure Blob.
Thus, multiple nodes working together, each processing 64 threads, add up to what I will call the “WEKA Accelerated Hot Blob Tier”.
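The fan-out pattern is easy to illustrate. The Python sketch below is a toy model of the idea only: fetch_object stands in for an Azure Blob GET, the shard-by-slicing partition is a simplification of WEKA’s real namespace distribution, and the thread count mirrors the 64 GET threads per backend described above.

```python
from concurrent.futures import ThreadPoolExecutor

THREADS_PER_BACKEND = 64  # matches the 64 GET threads per WEKA backend

def fetch_object(name):
    # Stand-in for an Azure Blob GET of one 64 MiB object
    return f"data:{name}"

def backend_read(objects):
    # One backend drains its share of the namespace with 64 workers
    with ThreadPoolExecutor(max_workers=THREADS_PER_BACKEND) as pool:
        return list(pool.map(fetch_object, objects))

def cluster_read(all_objects, backends=20):
    # Partition the namespace so each backend serves an equal share,
    # then let every backend fan out its own GETs
    shards = [all_objects[i::backends] for i in range(backends)]
    results = []
    for shard in shards:
        results.extend(backend_read(shard))
    return results
```

In the real system the backends run concurrently as well; the sketch iterates them sequentially for simplicity.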
Looking at the stats on the command line while the test was running (Figure 8), you can observe the distribution of servicing the tiered data is fully balanced across all the WEKA nodes in the cluster. This balance helps WEKA achieve its optimal performance from Azure Blob.
Figure 8 – Balanced backend nodes with 64 threads each for GET operations from BLOB
What real world problems can we solve with this feature?
1 – When one needs to ingest large volumes of data at once into the WEKA Azure platform. If the end user does not know what files will be “hot”, they can have it all reside directly on BLOB storage so that it doesn’t force any currently active data out of the flash tier.
2 – Running workloads that need to sequentially read large volumes of data infrequently. For example, an HPC job where the data is only used once a month or once a quarter. If each compute node reads a different subset of the data, there is no value to be gained from rehydrating the data into the flash tier / displacing data that is used repeatedly.
3 – Running read-intensive workloads where WEKA-accelerated Blob cold-read performance is satisfactory. Clients can mount the file system in obs_direct mode.
Conclusion
WEKA in Azure delivers exceptional performance for data-intensive workloads by leveraging parallelism, scalability, flash optimization, data tiering, & caching features. This enables organizations to achieve high throughput, low latency, and optimal resource utilization for their most demanding applications and use cases.
You can also add low latency high throughput reads directly from Hot Blob Storage as another use case. To quote from Kent one last time:
…As the digital landscape continues to evolve, embracing the WEKA Data Platform is not just a smart choice; it’s a strategic advantage that empowers you to harness the full potential of your HPC Grid.
Reference:
Issues with Identifying “Signal Copy” Blocks in Simulink Model Using find_system
I am working on a Simulink model where I need to identify all the "Signal Copy" blocks and replace them with direct connections between their source and destination blocks.
However, I am having trouble finding these blocks using the find_system function.
signalCopies = find_system(model, 'BlockType', 'SignalCopy');
Unfortunately, this command returns nothing, even though there are "Signal Copy" blocks present in the model. sil testing MATLAB Answers — New Questions
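One way to debug a search like this (a sketch; the library name shown on the canvas does not always match the BlockType string, and behavior may vary by release) is to list the BlockType of every block actually present in the model:

```matlab
% Diagnostic sketch: list every distinct BlockType in the model
% to discover how the "Signal Copy" blocks are really typed.
blocks = find_system(model, 'Type', 'Block');
types = cellfun(@(b) get_param(b, 'BlockType'), blocks, 'UniformOutput', false);
disp(unique(types))
```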
The Excavator Hydraulic Jack Forces are Changing Direction in Simscape Multibody
I am trying to model a mechanism of excavators in the Simscape multibody.
I am using prismatic joint to simulate the functionality of hydraulic jacks with the following settings:
I drive the motion of the hydraulic jacks (using a PS Ramp block) and track the sensed "actuator force" parameter via a Scope block.
So technically, I am doing an inverse dynamic analysis to calculate the needed force on the hydraulic jacks. Because all of the jacks (modeled with prismatic joints) open during my simulation, I expect all the actuator forces to be positive; however, the sign of the force changes over the simulation, which I believe is wrong, since I set the motion of the prismatic joints in one direction and the actuator force should be along that direction too. So I was wondering, why is this happening? simscape, inverse dynamic, prismatic joint, actuated force, hydraulic jack MATLAB Answers — New Questions