Month: September 2024
Microsoft Attack Simulator Training Foreign Language
I need some help changing the Microsoft Attack Simulator video training from the default English to a foreign language. The chosen video training supports the language, but I have been unable to find the setting that activates it.
License required
Good evening. Today I went to use the stock history function in Excel on my PC, but it always gets blocked, and when I check, it says I need a license. Do you know what this means? I hope you can help me.
2 Stocks missing from Stocks Data type in Excel
Two stocks are missing from the Stocks data type.
1. Premier Energies Ltd listed on the National Stock Exchange (NSE) of India & Bombay Stock Exchange (BSE). XNSE:PREMIERENE.
2. Bajaj Housing Finance Ltd. listed on the same exchanges as above.
Both are newly listed stocks. The first listed on 3rd September 2024; the second listed today, i.e. 16th September 2024.
How can these be added? I have given feedback on the first one multiple times.
Where can I log a request?
Thanks.
Switch to Azure Business Continuity Center for your at-scale BCDR management needs
In response to evolving customer requirements and environments since COVID-19, including the shift towards hybrid work models and the increase in ransomware attacks, we have observed a growing trend among customers to invest in multiple vendors for data protection. To address these needs, we have developed the Azure Business Continuity (ABC) Center, a streamlined, centralized management center that simplifies backup and disaster recovery across various environments (Azure, hybrid) and solutions (Azure Backup and Azure Site Recovery). Below are a few resources to learn more about Azure Business Continuity Center:
Business Continuity with ABCC: Part 4: optimize security configuration – Microsoft Community Hub
Business Continuity with ABCC: Part 5: Monitoring protection – Microsoft Community Hub
ABCC, in public preview since November 2023, is designed as an enhanced version of the Backup Center and will eventually replace it. Getting started is simple, with no prerequisites or costs involved. Even if you’ve been using Backup Center, no additional action is needed to begin viewing your protection estate in Azure Business Continuity Center. To start, simply navigate to the Azure portal and search for Azure Business Continuity Center.
Azure Business Continuity Center (ABCC) provides enhanced experiences for business continuity, and we want our customers to adopt it before it replaces the Backup Center. To support this transition, we have removed the Backup Center from the global search in the Azure portal, but there is still an option available from ABCC to go to the Backup Center.
Backup Center will no longer appear in the Azure Portal search results across all regions. We encourage you to explore the Azure Business Continuity Center (ABCC) for your BCDR journey and provide your valuable feedback to help us enhance it to better meet your needs.
If you still want to launch Backup Center, first go to Azure Business Continuity Center from the Azure portal search.
Then, from the ABCC help menu, select “Go to Backup Center”.
If you find yourself transitioning back to the Backup Center, please share your reasons for doing so, including any missing capabilities, performance issues, or other concerns you may have encountered. Your insights are invaluable in helping us enhance the ABCC experience.
Enhancing Retrieval-Augmented Generation with a Multimodal Knowledge Extraction and Retrieval System
The rapid evolution of AI has led to powerful tools for knowledge retrieval and question-answering systems, particularly with the rise of Retrieval-Augmented Generation (RAG) systems. This blog post introduces my capstone project, created as part of the IXN program at UCL in collaboration with Microsoft, aimed at enhancing RAG systems by integrating multimodal knowledge extraction and retrieval capabilities. The system enables AI agents to process both textual and visual data, offering more accurate and contextually relevant responses. In this post, I’ll walk you through the project’s goals, development journey, technical implementation, and outcomes.
Project Overview
The main goal of this project was to improve the performance of RAG systems by refining how multimodal data is extracted, stored, and retrieved. Current RAG systems primarily rely on text-based data, which limits their ability to generate accurate responses when queries require a combination of text and images. To address this, I developed a system capable of extracting, processing, and retrieving multimodal data from Wikimedia, allowing AI agents to generate more accurate, grounded and contextually relevant answers.
Key features include:
Multimodal Knowledge Extraction: Data from Wikimedia (text, images, tables) is preprocessed, run through the transformation pipeline, and stored in vector and graph databases for efficient retrieval.
Dynamic Knowledge Retrieval: A custom query engine, combined with an agentic approach using the ReAct agent, ensures flexible and accurate retrieval of information by dynamically selecting the best tools and strategies for each query.
The project began by addressing the limitations of existing RAG systems, particularly their difficulties with handling visual data and delivering accurate responses. After reviewing various technologies, a system architecture was developed to support both text and image data. Throughout the process, components were refined to ensure compatibility between LlamaIndex, Qdrant, and Neo4j, while optimising performance for managing large datasets. The primary challenges lay in handling the large volumes of data from Wikimedia, especially the processing of images, and refactoring the system for Dockerisation. These challenges were met through iterative improvements to the system architecture, ensuring efficient multimodal data handling and reliable deployment across environments.
Implementation Overview
This project integrates both textual and visual data to enhance RAG systems’ retrieval and response generation. The system’s architecture is split into two main processes:
Knowledge Extraction: Data is fetched from Wikimedia and transformed into embeddings for text and images. These embeddings are stored in Qdrant for efficient retrieval, while Neo4j captures the relationships between the nodes, ensuring the preservation of data structure.
Knowledge Retrieval: A dynamic query engine processes user queries, retrieving data from both Qdrant (using vector search) and Neo4j (via graph traversal). Advanced techniques like query expansion, reranking, and cross-referencing ensure the most relevant information is returned.
Tech Stack
The following technologies were used to build and deploy the system:
Python: Core programming language for data pipelines
LlamaIndex: Framework for indexing, transforming, and retrieving multimodal data
Qdrant: Vector database for similarity searches based on embeddings
Neo4j: Graph database used to store and manage relationships between data entities
Azure OpenAI (GPT-4o): Used for handling multimodal inputs, deploying models via Azure App Services
Text Embedding Ada-002: Model for generating text embeddings
Azure Computer Vision: Used for generating image embeddings
Gradio: Provides an interactive interface for querying the system
Docker and Docker Compose: Used for containerization and orchestration, ensuring consistent deployment
Implementation Details
Multimodal Knowledge Extraction
The system starts by fetching both textual and visual data from Wikimedia, using the Wikimedia API and web scraping techniques. The key steps in the knowledge extraction implementation are:
Data Preprocessing: Text is cleaned, images are classified into categories such as plots or images for appropriate handling during later transformations, and tables are structured for easier processing.
Node Creation and Transformation: Initial LlamaIndex nodes are created from this data, which then undergo several transformations through the transformation pipeline using the GPT-4o model deployed via Azure OpenAI:
Text and Table Transformations: Text data is cleaned, split into smaller chunks using semantic chunking, and new derived nodes are created from various transformations, like key entity extraction or table analysis. Each node has a unique LlamaIndex ID and carries metadata such as title, context, and relationships reflecting the hierarchical structure of the Wikimedia page and parent-child relationships with the new transformed nodes.
Image Transformations: Images are processed to generate descriptions, perform plot analysis, and identify key objects based on the image type, resulting in the creation of new text nodes.
Embeddings Generation: The last stage of the pipeline is to generate embeddings for images and transformed text nodes:
Text Embeddings: Generated using the text-embedding-ada-002 model deployed with Azure OpenAI on Azure App Services.
Image Embeddings: Generated using the Azure Computer Vision service.
Storage: Both text and image embeddings are stored in Qdrant with reference node IDs in the payload for fast retrieval. The full nodes and their relationships are stored in Neo4j.
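To make this stage concrete, here is a minimal Python sketch of the embed-and-store step. The endpoints, credentials, collection name, and node IDs are hypothetical placeholders, and the real pipeline runs these steps inside LlamaIndex transformations rather than as standalone calls:

from openai import AzureOpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from neo4j import GraphDatabase

# Hypothetical endpoints and credentials -- replace with real deployment values.
aoai = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
                   api_key="<key>", api_version="2024-02-01")
qdrant = QdrantClient(url="http://localhost:6333")
graph = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "<password>"))

# One collection for text-node embeddings; ada-002 vectors have 1536 dimensions.
qdrant.recreate_collection(collection_name="text_nodes",
                           vectors_config=VectorParams(size=1536, distance=Distance.COSINE))

def store_text_node(point_id, node_id, text, parent_id=None):
    # Embed the transformed text with text-embedding-ada-002.
    vector = aoai.embeddings.create(model="text-embedding-ada-002",
                                    input=text).data[0].embedding
    # Store the embedding in Qdrant, keeping the LlamaIndex node ID in the payload.
    qdrant.upsert(collection_name="text_nodes",
                  points=[PointStruct(id=point_id, vector=vector,
                                      payload={"node_id": node_id})])
    # Store the full node and its parent-child relationship in Neo4j.
    with graph.session() as session:
        session.run("MERGE (n:Node {id: $id}) SET n.text = $text",
                    id=node_id, text=text)
        if parent_id is not None:
            session.run("MATCH (p:Node {id: $pid}) MATCH (c:Node {id: $cid}) "
                        "MERGE (p)-[:HAS_CHILD]->(c)", pid=parent_id, cid=node_id)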
Knowledge Retrieval
The retrieval process involves several key steps:
Query Expansion: The system generates multiple variations of the original query, expanding the search space to capture relevant data.
Vector Search: The expanded queries are passed to Qdrant for a similarity-based search using cosine similarity.
Reranking and Cross-Retrieval: Results are then reranked by relevance. Retrieved nodes from Qdrant contain LlamaIndex node IDs in the payload. These are used to fetch the nodes from Neo4j and then to get the nodes with original data from Wikimedia by traversing the graph, ensuring the final response is based only on original Wikipedia content.
ReAct Agent Integration: The ReAct agent dynamically manages the retrieval process by selecting tools based on the query context. It integrates with the custom-built query engine to balance AI-generated insights with the original data from Neo4j and Qdrant.
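A simplified Python sketch of this flow, reusing the hypothetical clients and collection from the extraction sketch above and assuming a GPT-4o chat deployment for query expansion (the ReAct agent and the LlamaIndex query engine are omitted for brevity):

def expand_query(query, n=3):
    # Ask the chat model for paraphrases of the query to widen the search space.
    resp = aoai.chat.completions.create(
        model="gpt-4o",  # hypothetical deployment name
        messages=[{"role": "user",
                   "content": f"Rewrite this query {n} different ways, one per line:\n{query}"}])
    return [query] + resp.choices[0].message.content.splitlines()

def retrieve(query, top_k=5):
    hits = []
    for q in expand_query(query):
        vector = aoai.embeddings.create(model="text-embedding-ada-002",
                                        input=q).data[0].embedding
        # Cosine-similarity search in Qdrant for each expanded query.
        hits += qdrant.search(collection_name="text_nodes",
                              query_vector=vector, limit=top_k)
    # Rerank all candidates by similarity score and de-duplicate by node ID.
    results, seen = [], set()
    for hit in sorted(hits, key=lambda h: h.score, reverse=True):
        node_id = hit.payload["node_id"]
        if node_id in seen:
            continue
        seen.add(node_id)
        # Cross-retrieve: traverse the graph back to the original Wikimedia node.
        with graph.session() as session:
            record = session.run(
                "MATCH (orig:Node)-[:HAS_CHILD*0..]->(n:Node {id: $id}) "
                "WHERE NOT ()-[:HAS_CHILD]->(orig) RETURN orig.text AS text",
                id=node_id).single()
        if record is not None:
            results.append(record["text"])
    return results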
Dockerization with Docker Compose
To ensure consistent deployment across different environments, the entire application is containerised using Docker. Docker Compose orchestrates multiple containers, including the knowledge extractor, retriever, Neo4j, and Qdrant services. This setup simplifies the deployment process and enhances scalability.
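A minimal docker-compose.yml along these lines might look as follows; the service names, images, and ports here are illustrative assumptions, not the project’s actual file:

services:
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"
  neo4j:
    image: neo4j:5
    environment:
      - NEO4J_AUTH=neo4j/<password>
    ports:
      - "7687:7687"
  extractor:
    build: ./extractor          # knowledge extraction pipeline
    depends_on: [qdrant, neo4j]
  retriever:
    build: ./retriever          # query engine and Gradio UI
    ports:
      - "7860:7860"             # Gradio's default port
    depends_on: [qdrant, neo4j]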
Results and Outcomes
The system effectively enhances the grounding and accuracy of responses generated by RAG systems. By incorporating multimodal data, it delivers contextually relevant answers, particularly in scenarios where visual information was critical. The integration of Qdrant and Neo4j proved to be highly efficient, enabling fast retrieval and accurate results.
Additionally, a user-friendly interface built with Gradio allows users to interact with the system and compare the AI-generated responses with standard LLM output, offering an easy way to evaluate the improvements.
Here is a snapshot of the Gradio UI:
Future Development
Several directions for future development have been identified to further enhance the system’s capabilities:
Agentic Framework Expansion: A future version of the system could incorporate an autonomous tool capable of determining whether the existing knowledge base is sufficient for a query. If the knowledge base is found lacking, the system could automatically initiate a knowledge extraction process to address the gap. This enhancement would bring greater adaptability and self-sufficiency to the system.
Knowledge Graph with Entities: Expanding the knowledge graph to include key entities such as individuals, locations, and events or others appropriate for the domain. This would add considerable depth and precision to the retrieval process. The integration of such entities would provide a more comprehensive and interconnected knowledge base, improving both the relevance and accuracy of results.
Enhanced Multimodality: Future iterations could expand the system’s capabilities in handling image data. This may include adding support for image comparison, object detection, or breaking images down into distinct components. Such features would enable more sophisticated queries and increase the system’s versatility in handling diverse data formats.
Incorporating these advancements will position the system to play an important role in the evolving field of multimodal AI, further bridging the gap between text and visual data integration in knowledge retrieval.
Summary
This project demonstrates the potential of enhancing RAG systems by integrating multimodal data, allowing AI to process both text and images more effectively. Through the use of technologies like LlamaIndex, Qdrant, and Neo4j, the system delivers more grounded, contextually relevant answers at high speed. With a focus on accurate knowledge retrieval and dynamic query handling, the project showcases a significant advancement in AI-driven question-answering systems. For more insights and to explore the project, please visit the GitHub repository.
If you’d like to connect, feel free to reach out to me on LinkedIn.
Discover the Hub Page Every JavaScript Developer Needs to Know at Microsoft!
Did you know that Microsoft offers an exclusive Hub Page just for JavaScript developers? JavaScript at Microsoft brings everything you need into one place to start building apps, learn more about JavaScript, and stay updated on the latest from Microsoft!
Let’s dive in and explore this incredible platform, and see how you can make the most of its resources!
What is JavaScript at Microsoft?
On JavaScript at Microsoft, you’ll find practical tutorials, detailed documentation, code samples using Azure, and so much more! Whether you’re a beginner or a seasoned developer, this platform is designed to support and speed up your learning and development, helping you get the most out of JavaScript-related technologies.
What will you find on JavaScript at Microsoft?
There are tons of exciting resources on this portal! What’s great is that everything is super organized and centralized, so you can quickly find all the info you need about the JavaScript world at Microsoft.
Let’s take a closer look at what you can find on JavaScript at Microsoft:
Serverless ChatGPT with RAG using LangChain.js
Right at the top of the page, you’ll find the latest videos, tutorials, articles, and even code samples like the Serverless AI Chat with RAG using LangChain.js. This is an app where you’ll learn how to create your own serverless ChatGPT using the Retrieval-Augmented Generation (RAG) technique with LangChain.js. You can run it locally with Ollama and Mistral, or deploy it on Azure in just a few minutes, using your own data.
We highly recommend exploring this awesome example! There’s so much to learn, and who knows, it might inspire you to create your own version of a chatbot with JavaScript! Fork the project right now and drop a star ⭐!
Videos and Series on JavaScript + Azure
In the video section, you’ll find a range of content on how to use JavaScript with Azure. These videos range from short tutorials to longer talks of 30 to 45 minutes, showing you how to build amazing applications with JavaScript and Azure.
For example, this year, we had the JavaScript Developer Day with lots of amazing talks from Microsoft experts and the technical community, covering how you can use JavaScript with different Azure services! Some standout sessions include:
Building a versatile RAG Pattern chat bot with Azure OpenAI, LangChain | JavaScript Dev Day
LangChain.js + Azure: A Generative AI App Journey | JavaScript Dev Day
GitHub Copilot Can Do That? | JavaScript Dev Day
JavaScript + Azure Code Samples and Open Source Projects
In this section, you’ll find a variety of open-source projects that you can contribute to! Many of these projects are maintained by the JavaScript Advocacy and Developer Division teams at Microsoft. They’re aimed at enterprise use and follow the best development practices in JavaScript! Dive into these projects, experiment, and help us improve them with your contributions!
Tutorials and More Videos!
In the tutorials section, you’ll find a wide variety of video tutorials covering different needs, from using the Visual Studio Code debugger to deploying apps on Azure Static Web Apps.
Here are some examples of tutorials you’ll find:
End-to-end browser debugging of your Azure Static Web Apps with Visual Studio Code
Azure libraries packages for JavaScript
Introduction to Playwright: What is Playwright?
Deploy React websites to the cloud with Azure Static Web Apps
Workshops and Documentation
Finally, you’ll find various workshops and official documentation on how to use JavaScript with Azure and other Microsoft technologies.
On this hub, you’ll find workshops like:
Microservices in practice with Node.js, Docker and Azure
LAB: Build a serverless web application end-to-end on Microsoft Azure
Create your own ChatGPT with Retrieval-Augmented-Generation
Build JavaScript applications with Node.js
Conclusion
JavaScript at Microsoft is the complete portal for anyone who wants to learn more about JavaScript and how to use it with Microsoft technologies. So, if you’re looking to dive deeper into JavaScript, Azure, TypeScript, Artificial Intelligence, Testing, and more, be sure to check out the portal and explore all the resources available!
I hope you enjoyed this article and that it inspires you to explore more about JavaScript at Microsoft! If you have any questions or suggestions, feel free to leave a comment below! 😎
The New Microsoft 365 Photo Update Settings Policy for User Profile Photos
Photo Update Settings Policy is Long-term Unified Replacement for Other Controls
Given the historical foundation of Microsoft 365 in several on-premises applications, it probably wasn’t surprising that we ended up with a confusing mish-mash of routes by which it was possible to update the profile photos for user accounts through SharePoint, Exchange, Teams, Delve, PowerShell, and so on. Looking back, it took a surprising amount of time before Microsoft acknowledged that the situation was untenable.
A new approach that worked across Microsoft 365 was necessary. That process began in October 2023 with the retirement of the Exchange Online cmdlets to update photos for mailboxes. The foundation for the new approach was a set of Graph APIs surfaced as cmdlets in the Microsoft Graph PowerShell SDK, like Set-MgUserPhotoContent.
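For example, updating a user’s photo with the SDK cmdlet is a single call. This is a sketch: the scope shown is what an administrator typically needs, and the account and file path are placeholder values.

Connect-MgGraph -Scopes "User.ReadWrite.All"
Set-MgUserPhotoContent -UserId "user@contoso.com" -InFile "C:\Temp\Photo.jpg"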
A New Photo Update Settings Policy to Control User Profile Updates
In June 2024, Microsoft introduced a new Entra ID policy based on the photoUpdateSettings resource to control who can update photos and the allowed sources for updates. Managing the photo update settings policy requires the PeopleSettings.ReadWrite.All scope. The settings for a tenant can be retrieved as follows:
$Uri = "https://graph.microsoft.com/beta/admin/people/photoupdatesettings"
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Get
$Settings

Name            Value
----            -----
allowedRoles    {}
@odata.context  https://graph.microsoft.com/beta/$metadata#admin/people/photoUpdateSettings/$entity
Source
The settings shown above are the default. The supported values are described in the photoUpdateSettings documentation.
Controlling From Where Photos Can Be Updated
The source for photo updates can be undefined, meaning that photo updates can be sourced from applications running in either the cloud or on-premises (synchronized to Entra ID from Active Directory). Alternatively, you can set the source to be either cloud or on-premises. For example, to update the settings so that photo changes are only possible through cloud applications, create a hash table with a single item to change the source to cloud and use the hash table as the payload to patch the policy:
$Body = @{}
$Body.Add("Source", "Cloud")
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Patch -Body $Body
Like any update to an Entra ID policy, it can take 24 hours before the policy update is effective across a tenant.
Controlling Who Can Update Photos
By default, any user can update the photo for their account and the value for AllowedRoles is blank. If you want to restrict who can update photos, you can select one or more directory roles and include the GUIDs for these roles in the AllowedRoles property (a string collection).
The roles defined in AllowedRoles must hold the permission to set user photos. In Graph terms, these permissions are either microsoft.directory/users/photo/update or microsoft.directory/users/allProperties/allTasks (only held by the Global administrator role). The following roles can be used:
Directory writers (9360feb5-f418-4baa-8175-e2a00bac4301).
Intune administrator (3a2c62db-5318-420d-8d74-23affee5d9d5).
Partner Tier1 Support (4ba39ca4-527c-499a-b93d-d9b492c50246) – not intended for general use.
Partner Tier2 Support (e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8) – not intended for general use.
User administrator (fe930be7-5e62-47db-91af-98c3a49a38b1).
Global administrator (62e90394-69f5-4237-9190-012177145e10).
All are privileged roles, meaning they enjoy a heightened level of access to sensitive information.
To update the photo settings policy to confine updates to specific roles, create a hash table to hold the GUIDs of the selected roles. Create a second hash table to hold the payload to update the settings and include the hash table with the roles. Finally, patch the policy.
$Roles = @{}
$Roles.Add("62e90394-69f5-4237-9190-012177145e10", $null)
$Roles.Add("fe930be7-5e62-47db-91af-98c3a49a38b1", $null)
$Body = @{}
$Body.Add("allowedRoles", $Roles)
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Patch -Body $Body
To reverse the restriction by removing the roles, run this code:
$Body = '{
"allowedRoles": []
}'
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Patch -Body $Body
The result of limiting photo updates for user accounts to the user administrator and global administrator roles means that after the new policy percolates throughout the tenant, any account that doesn’t hold a specified role cannot change their profile photo.
The Teams client is probably the best example, and the implementation here is not yet optimal. The block on photo updates imposed by an OWA mailbox policy causes Teams to inform the user that administrative restrictions stop photo updates. However, if the photo update settings policy restricts updates to specific roles, Teams allows the user to go through the process of selecting and uploading a photo before failing (Figure 1).
Figure 1: A failure to update a profile photo due to policy restrictions
An Early Implementation of the Photo Update Settings Policy
This kind of thing happens in the early stages of implementation. It will take time for Microsoft to update clients to allow and block profile updates based on the photo settings policy. And it will take time for tenants to move from the previous block imposed by OWA mailbox policies. In doing so, you’ll notice that the only restriction supported by the new policy is through roles. The OWA mailbox policy setting allows per-user control and multiple policies can exist within a tenant. We’re therefore heading to a less granular policy.
Maybe a less granular mechanism will be acceptable if it helps with the rationalization of photo updates across Microsoft 365. However, I can’t help thinking that this is a retrograde step. Perhaps Microsoft will address the need for more granular control through Entra ID administrative units, which seems to be the answer for this kind of requirement everywhere else in Entra ID.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
How to convert csv to Heatmap?
Hello, I am relatively new to using MATLAB, so I am running into some issues. I want to convert a .csv file with a lot of raw data and visualize it as a heatmap. It is a 36189×88 data table. The code I have so far shows something, but it is not really giving me what I want. Any help or advice would be greatly appreciated.
clc;
clear
close all;
data = readtable("N303_M.2.csv");
%% Data Filtering/Processing
clc, clearvars -except data, format compact, close all
%convert from Table to Array
strain(:, 1:88) = table2array(data(:,1:88));
%% Data Visualizations
figure(1);
plot(strain(:, 1:88));
heatmap(strain(:, 1:88), 'Colormap', jet)
title('Visual Data for Mat')
How to callback same function for changes in multiple uiobjects?
I am creating a uifigure-based app programmatically for de-noising noisy images. The code is as follows:
function filteredImage
fig = uifigure;
g = uigridlayout(fig,[3 2]);
g.RowHeight = {30,30,'1x'};
g.ColumnSpacing = 10;
filterLabel = uilabel(g,"Text","Filter Type","WordWrap","on","FontSize",18,"FontWeight","bold");
filterLabel.Layout.Row = 1;
filterLabel.Layout.Column = 1;
% Choose type of filter to be displayed
filterType = uidropdown(g,"Editable","on","Position",[0 50 20 10],"FontSize",18);
filterType.Layout.Row = 1;
filterType.Layout.Column = 2;
filterType.Items = ["Arithmetic Mean Filter",
"Geometric Mean Filter",
"Weighted Average Filter"];
kernelDimlabel = uilabel("Parent",g,"Text","Kernel Dimensions","FontSize",18,"FontWeight","bold");
kernelDimlabel.Layout.Row = 2;
kernelDimlabel.Layout.Column = 1;
% Set the kernel size (since it is a square matrix, #rows = #columns)
kernelDim = uispinner("Parent",g,"Step",2,"Value",3,"Limits",[3 11],"FontSize",18);
kernelDim.Layout.Row = 2;
kernelDim.Layout.Column = 2;
% Noisy image displayed
im1 = uiimage(g,"ImageSource","poutsalt & pepper.png");
im1.Layout.Row = 3;
im1.Layout.Column = 1;
% Filtered image displayed
im2 = uiimage(g);
im2.Layout.Row = 3;
im2.Layout.Column = 2;
% Here, filterImage updates im2 whenever the value of filterType changes
filterType.ValueChangedFcn = @(src,event)filterImage(src,event,im1,im2,filterType,kernelDim);
% I want to update im2 when either the value of filterType changes OR when
% kernelDim changes
end
function filterImage(src,event,im1,im2,filterType,kernelDim)
kDim = kernelDim.Value;
I = imread(im1.ImageSource);
I_name = erase(im1.ImageSource,".png");
incr = floor(kDim/2);
im1_padded = double(padarray(I,[incr incr],0,'both'));
[r,c] = size(I);
switch filterType.Value
case "Arithmetic Mean Filter"
kernel = double(ones(kDim));
for i=1:r
for j=1:c
I_filtered(i+incr,j+incr) = (1/(kDim*kDim))*sum(sum(kernel.*im1_padded(i:i+kDim-1,j:j+kDim-1)));
end
end
ext = "_amf"+"_"+ string(kDim) + ".png";
case "Geometric Mean Filter"
kernel = double(ones(kDim));
for i=1:r
for j=1:c
I_filtered(i+incr,j+incr) = prod(prod(kernel.*im1_padded(i:i+kDim-1,j:j+kDim-1))).^(1/(kDim*kDim));
end
end
ext = "_gmf"+"_"+ string(kDim) + ".png";
case "Weighted Average Filter"
if kDim == 3
kernel = (1/16)*[1 2 1;
2 4 2;
1 2 1];
else
kernel = (1/52)*[1 1 2 1 1;
1 2 4 2 1;
2 4 8 4 2;
1 2 4 2 1;
1 1 2 1 1];
end
for i=1:r
for j=1:c
I_filtered(i+incr,j+incr) = sum(sum(kernel.*im1_padded(i:i+kDim-1,j:j+kDim-1)));
end
end
ext = "_waf"+"_"+ string(kDim) + ".png";
end
I_filtered = uint8(I_filtered);
filteredImagePath = strcat(I_name,ext);
imwrite(I_filtered,filteredImagePath);
im2.ImageSource = filteredImagePath;
end
As of now I am getting the filtered image output. Initially there’s no image in im2. But it appears only after I change the kernelDim followed by filterType. One obvious way is to explicitly write the code for initial values, but I want the function filterImage to somehow do it.
Please suggest techniques to do this in an efficient manner.
2024-09 Cumulative Update for Windows 10 Version 22H2 for x64-based Systems (KB5043064)
I can’t install this; it quits at 7%. Any info on what to do? Thanks. I can’t get rid of it.
Automatic Task Assignment Based on Schedule
I kindly request assistance with creating a formula.
I have an employee table with employee names and start times (A1:B5). I have a second table with the day’s tasks’ start and end times (D1:E17), each task taking 30 minutes to complete. I have a running count (G1:G17) of how many tasks each employee has completed, as an employee can only complete 4 tasks in total before being taken out of rotation and no longer assigned.
I need a result like in H1:H17 that follows the schedule and assigns a free employee (column A) based on their start time (column B), the 30 minutes needed to complete the task (columns D and E), and their running count of no more than 4 (column G).
Time zone/scheduled times changing for some reminder emails
Hi everyone,
My work team are using the Bookings app to manage training session bookings and registrations. Everything is set up in line with the MS Bookings guides. We have reminder emails set up to auto send 1 day before the session starts.
We have recently been notified that some of our reminder emails are advising of an incorrect time and stating UTC (Coordinated Universal Time).
Our region and time zone settings are correct (UTC+10 Brisbane), and we have ticked “Always show time slots in business time zone”; however, this shouldn’t be an issue, as all of our current attendees are located within the same time zone.
The initial booking information and confirmation email state the correct time of the session, and the majority of the reminder emails are correct; it’s just some of them that change the time. It’s not always the same session type either, or everyone who has registered; it’s very sporadic. E.g., two people registered for the same session: one receives a reminder email stating the correct time, while the other receives one that says the session is 12:30am-01:00am when the correct time is 10:30am-11:00am on the same day.
We have searched thoroughly through the Bookings system and are unable to find a resolution.
We have also checked with one of our attendees who received an incorrect reminder to check their time settings in Teams and 365 and all are correct.
Any information or how to resolve this would be most welcome.
Thanks,
Cara
Graph API for accessing Mail Info
Hi PowerShell Community,
I need to access the info below
Total number of emails in each mailbox
Total attachments per email in each mailbox
Type of attachments (PDF, Word, Excel, PPT, image, etc.)
Total email, each mailbox
Last access
I need to know which Graph API provides the info mentioned above. Can someone please help me find this?
Embedded Coder for STM32F446RE about CAN communication
Hello MATLAB Support Team,
I am currently using Simulink to implement CAN communication with an STM32-NucleoF446RE board, but I am facing some difficulties and would like to request support.
At the moment, I have connected an MCU-230 transceiver to a 500 kbit/s CAN bus, with the RX-TX lines of the transceiver connected to the STM32 board. I have also ensured that the baud rate is set correctly, but the problem of not receiving any values through CAN Receive persists.
Attached are some images showing parts of the code I have written and screenshots from STM32 IDE.
If there is any information you need, feel free to ask, and I will provide it. Also, if I have made any mistakes, please don’t hesitate to contact me.
Can someone tell me how to do the Aspen Plus-Excel-MATLAB link?
I need to use a user model subroutine in Aspen Plus linked to MATLAB and Excel. Can someone tell me the procedure step by step, please?
Simulink Desktop Real-Time issue under Matlab R2024a
I am trying to run SLDRT models with MATLAB/Simulink R2024a.
The models were functional under MATLAB R2023a.
In MATLAB/Simulink R2024a, the analog inputs of my NI-PCI 6321 card update at a very slow pace and the analog outputs never update.
To ensure that this is not a Windows 11 issue, I reverted back to MATLAB/Simulink R2023a and the old models still worked fine.
Is there a real issue or am I missing something?
Simulating Levy walk in MATLAB. (Not Levy Flight)
I am trying to simulate a Levy walk in 2D (not a Levy flight). I get the MSD and VCAF correctly, but I don’t get the step length distribution correctly (Levy walk step lengths have heavy tails). I don’t know if my simulation is entirely correct. Can someone confirm my code?
alpha=1.4;
x(1)=0;
y(1)=0;
n = 100; % number of steps (direction changes)
dt =0.1; % time step
v=1; % velocity
for i=1:n
t= round(abs((rand()).^(-1/alpha))./dt); % time before the change in direction taken from a simple stable distribution
theta = 2*pi*rand;
time(i)=t;
for j=1:t
x(end+1) = x(end) + v*dt*cos(theta);
y(end+1) = y(end) + v*dt*sin(theta);
end
end
figure(1);
plot(x, y, '-');
%% Distribution of step size
dx = diff(x);
dy = diff(y);
vxy=([dx, dy]);
bins = linspace(min(vxy), max(vxy), 75);
[count, edge] = histcounts(vxy, bins, 'Normalization', 'pdf');
pd = fitdist(transpose(vxy), 'Normal');
countfit = pdf(pd, edge(1:end-1));
figure(3)
semilogy(edge(1:end-1), count, 'o', 'LineWidth', 2.0); hold on;
semilogy(edge(1:end-1), countfit, '-r', 'LineWidth', 2.0); hold off;
Archive
Hi Outlook Team,
Good day. The customer wants to clarify the following questions regarding the archive process.
According to the MRM policy applied to user mailboxes, emails do not immediately appear in the Archive folder. They become visible to end users after 7 days.
Where do these emails reside during those seven days, and is this behavior expected?
The customer wants to understand the behavior of the email disappearance.
If email disappearance is intentional by design, the customer seeks a change in this process.
resolution of MDOF using ode45
I have a problem solving the system with ode45. The code works, but the displacement in the graphed output is not what I would expect from a chirp signal. What could be the error in my code?
%MATRIX
M=diag([m1, m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, m12, m13, m14, m15, m16, m17, m18, m19]);
% stiffness matrix 19×19
K = zeros(19,19);
K(1,1) = k1 + k4 + k7 + k8;
K(1,2) = -k1;
K(1,5) = -k4;
K(1,7) = -k7;
K(1,8) = -k8;
K(2,1) = -k1;
K(2,2) = k1 + k2;
K(2,3) = -k2;
K(3,2) = -k2;
K(3,3) = k2 + k3;
K(3,4) = -k3;
K(4,3) = -k3;
K(4,4) = k3;
K(5,1) = -k4;
K(5,5) = k4 + k5 + k6;
K(5,6) = -k5;
K(5,7) = -k6;
K(6,5) = -k5;
K(6,6) = k5 + k11;
K(6,11) = -k11;
K(7,1) = -k7;
K(7,5) = -k6;
K(7,7) = k6 + k7 + k23;
K(7,18) = -k23;
K(8,1) = -k8;
K(8,8) = k8 + k9;
K(8,9) = -k9;
K(9,8) = -k9;
K(9,9) = k9 + k10;
K(9,10) = -k10;
K(10,9) = -k10;
K(10,10) = k10;
K(11,6) = -k11;
K(11,7) = -k12;
K(11,11) = k11 + k12 + k13 + k14;
K(11,12) = -k13;
K(11,13) = -k14;
K(12,11) = -k13;
K(12,12) = k13 + k15;
K(12,14) = -k15;
K(13,11) = -k14;
K(13,13) = k14 + k16;
K(13,15) = -k16;
K(14,12) = -k15;
K(14,14) = k15 + k17;
K(14,16) = -k17;
K(15,13) = -k16;
K(15,15) = k16 + k18;
K(15,17) = -k18;
K(16,14) = -k17;
K(16,16) = k17 + k19;
K(17,15) = -k18;
K(17,17) = k18 + k20;
K(18,7) = -k23;
K(18,18) = k23 + k21 + k22;
K(18,19) = -k21 - k22;
K(19,18) = -k21 - k22;
K(19,19) = k21 + k22;
% damping matrix 19×19
C = zeros(19,19);
C(1,1) = c1 + c4 + c7 + c8;
C(1,2) = -c1;
C(1,5) = -c4;
C(1,7) = -c7;
C(1,8) = -c8;
C(2,1) = -c1;
C(2,2) = c1 + c2;
C(2,3) = -c2;
C(3,2) = -c2;
C(3,3) = c2 + c3;
C(3,4) = -c3;
C(4,3) = -c3;
C(4,4) = c3;
C(5,1) = -c4;
C(5,5) = c4 + c5 + c6;
C(5,6) = -c5;
C(5,7) = -c6;
C(6,5) = -c5;
C(6,6) = c5 + c11;
C(6,11) = -c11;
C(7,1) = -c7;
C(7,5) = -c6;
C(7,7) = c6 + c7 + c23;
C(7,18) = -c23;
C(8,1) = -c8;
C(8,8) = c8 + c9;
C(8,9) = -c9;
C(9,8) = -c9;
C(9,9) = c9 + c10;
C(9,10) = -c10;
C(10,9) = -c10;
C(10,10) = c10;
C(11,6) = -c11;
C(11,7) = -c12;
C(11,11) = c11 + c12 + c13 + c14;
C(11,12) = -c13;
C(11,13) = -c14;
C(12,11) = -c13;
C(12,12) = c13 + c15;
C(12,14) = -c15;
C(13,11) = -c14;
C(13,13) = c14 + c16;
C(13,15) = -c16;
C(14,12) = -c15;
C(14,14) = c15 + c17;
C(14,16) = -c17;
C(15,13) = -c16;
C(15,15) = c16 + c18;
C(15,17) = -c18;
C(16,14) = -c17;
C(16,16) = c17 + c19;
C(17,15) = -c18;
C(17,17) = c18 + c20;
C(18,7) = -c23;
C(18,18) = c23 + c21 + c22;
C(18,19) = -c21 - c22;
C(19,18) = -c21 - c22;
C(19,19) = c21 + c22;
n=19;
y0 = zeros(2*n,1);
tspan = [0 120];
% ode45
[t, y] = ode45(@(t, y) odefcn_standing(t, y, M, C, K), tspan, y0);
figure;
plot(t, y(:, 19));
xlabel('Time (s)');
ylabel('Displacement (m)');
% legend('y1', 'y2', 'y3');
title('response of the system 19DOF');
grid on;
function dy = odefcn_standing(t, y, M, C, K)
n = 19; % Number of degrees of freedom
dy = zeros(2 * n, 1);
% Construction of matrix A
A = [zeros(n), eye(n);
-inv(M) * K, -inv(M) * C];
F = zeros(19, 1);
f0 = 0.5; % initial frequency
f1 = 80; % final frequency
t_f = 120; % duration of chirp signal
chirp_signal = chirp(t, f0, t_f, f1);
F(16,:) = 10*chirp_signal; % on mass 16
F(17,:) = 10*chirp_signal; % on mass 17
% Construction of matrix B
B = [zeros(n, n); inv(M)];
dy = A * y + B * F;
end
Getting the Error “Not enough input arguments” but I used the exact same method earlier in the code without issue.
clear all
close all
clc
% EMEC-342 Mini Project: 4-Bar Linkage Analysis
% Known Values
a=10; % cm
b=25; %cm
c=25; %cm
d = 20; % cm
AP=50; % cm
n=a/2;
q=c/2;
delta2=0;
delta3=0;
delta4=0;
w2=10; %rad/sec
alpha2=0;
oc=1;
t2=zeros(1,361); % rotation angle theta2 of O2A
for (i=1:361)
t2=i-1;
end
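% (note: as written, each pass overwrites t2 with the scalar i-1, so after
% the loop t2 holds the single value 360 rather than the intended 1-by-361
% sweep; t2(i) = i-1 would store the whole sweep)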
% Calculation of K1,K2,K3,K4,K5
K1=d/a;
K2=d/c;
K3=(a^2-b^2+c^2+d^2)/(2*a*c);
K4=d/b;
K5=(c^2-d^2-a^2-b^2)/(2*a*b);
%% Matlab Functions
function f=Grashof(lengths)
u=sort(lengths);
if((u(1)+u(4))<(u(2)+u(3)))
f=1;
elseif (u(1)+u(4))==(u(2)+u(3))
f=0;
else
f=-1;
end
end
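% A quick check of the classification above with the lengths from the Known
% Values block (an illustrative call, not part of the original script):
% Grashof([10 25 25 20]) returns 1, since the shortest plus longest link
% (10 + 25) is less than the sum of the other two (25 + 25).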
%% Functions for calculation of angular orientations theta3, theta4
% of links AB and O4B
% Calculation of A
function AA=A(K1,K2,K3,t2)
AA=cos(t2)-K1-K2*cos(t2)+K3;
end
% Calculation of B
function BB=B(t2)
BB=-2*sin(t2);
end
% Calculation of C
function CC=C(K1,K2,K3,t2)
CC=K1-(K2+1)*cos(t2)+K3;
end
% Calculation of angular orientation theta4
function t4=theta4(K1,K2,K3,t2,oc)
AA = A(K1,K2,K3,theta2);
BB = B(theta2);
CC = C(K1,K2,K3,theta2);
t4=2*atan((-BB+oc*sqrt(BB^2-4*AA*CC))/(2*AA));
end
% Calculation of D
function DD=D(K1,K4,K5,t2)
DD=cos(t2)-K1+(K4*cos(t2))+K5;
end
% Calculation of E
function EE=E(t2)
EE=-2*sin(t2);
end
% Calculation of F
function FF=F(K1,K4,K5,t2)
FF=K1+(K4-1)*cos(t2)+K5;
end
% Calculation of angular orientation theta3
function t3=theta3(K1,K4,K5,t2,oc)
DD=D(K1,K4,K5,t2);
EE=E(t2);
FF=F(K1,K4,K5,t2);
t3=2*atan((-EE+oc*sqrt(EE^2-4*DD*FF))/(2*DD));
end
%% Functions for calculation of angular speeds omega3, omega4
% of links AB and O4B
%returns results as vector of x and y components
% returns x and y component
function as=angSpeed(a,b,c,w2,t2,t3,t4)
as=[w2*a/b*sin(t4-t2)/sin(t3-t4),w2*a/c*sin(t2-t3)/sin(t4-t3)];
end
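% (the two entries are the loop-closure velocity relations: the first is
% omega3 = w2*(a/b)*sin(t4-t2)/sin(t3-t4), the second is
% omega4 = w2*(a/c)*sin(t2-t3)/sin(t4-t3))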
%% Position Vectors
function r=RAO2(a,t2)
r= [a*cos(t2),a*sin(t2)];
end
function r=RPA(AP,t3,delta3)
r=AP*[cos(t3+delta3),sin(t3+delta3)];
end
function r=RPO2(a,PA,t2,t3,delta3)
r=RAO2(a,t2)+RPA(PA,t3,delta3);
end
%% Functions for calculation of angular acceleration alpha3, alpha4
% of links AB and O4B
%returns results as vector of x and y components
% returns x and y component
% Calculation of G
function GG=G(c,theta4)
GG=c*sin(theta4);
end
% Calculation of H
function HH=H(b,theta3)
HH=b*sin(theta3);
end
% Calculation of I
function II=I(a,b,c,alpha2,w2,omega3,omega4,t2,theta3,theta4)
II=(a*alpha2*sin(t2))+(a*w2^2*cos(t2))+(b*omega3^2*cos(theta3))-(c*omega4^2*cos(theta4));
end
% Calculation of J
function JJ=J(c,theta4)
JJ=c*cos(theta4);
end
% Calculation of K
function KK=K(b,theta3)
KK=b*cos(theta3);
end
% Calculation of L
function LL=L(a,b,c,alpha2,w2,angSpeed,t2,theta3,theta4)
LL=(a*alpha2*cos(t2))+(a*w2^2*sin(t2))+(b*angSpeed(1)^2*sin(theta3))-(c*angSpeed(2)^2*sin(theta4));
end
function aa=angAccel(G,H,I,J,K,L)
GG=G(c,theta4);
HH=H(b,theta3);
II=I(a,b,c,alpha2,w2,omega3,omega4,t2,theta3,theta4);
JJ=J(c,theta4);
KK=K(b,theta3);
LL=L(a,b,c,alpha2,w2,angSpeed,t2,theta3,theta4);
aa=[(II*JJ-GG*LL)/(GG*KK-HH*JJ),(II*KK-HH*JJ)/(GG*KK-HH*JJ)];
end
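% (note: the input names G,H,I,J,K,L shadow the helper functions of the same
% names, so inside this function G(c,theta4) indexes or applies whatever was
% passed in as G rather than calling the helper defined above)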
%% Trace Point Velocity
function v=VA(a,w2,t2)
v=[-a*w2*sin(t2),a*w2*cos(t2)];
end
function v=VPA(AP,angSpeed,theta3,delta3)
v=AP*[-angSpeed(1)*sin(theta3+delta3),angSpeed(1)*cos(theta3+delta3)];
end
function v=VPO2(a,w2,angSpeed,t2,theta3,delta3,AP)
v=VA(a,w2,t2)+VPA(AP,angSpeed(1),theta3,delta3);
end
%% Trace Point Acceleration
function a=aA(a,alpha2,t2,w2)
a=[-a*alpha2*sin(t2),-a*w2^2*cos(t2)];
end
function a=APA(AP,angSpeed,theta3,delta3,angAccel)
a=AP*[-angAccel(1)*sin(theta3+delta3),-angSpeed(1)^2*cos(theta3+delta3)];
end
function a=APO2(a,w2,angSpeed,t2,theta3,delta3,AP)
a=aA(a,alpha2,t2,w2)+APA(AP,angSpeed(1),theta3,delta3,alpha3);
end
%% Tracepoint Acceleration N
function aN=ANO2(alpha2,t2,delta2,w2,RNO2)
aN=RNO2*[-alpha2*sin(t2+delta2)-(w2^2*cos(t2+deta2)),alpha2*cos(t2+delta2)-(w2^2*sin(t2+delta2))];
end
%% Plots
% Plot of theta3 and theta4 as functions of theta2
figure(1)
plot(t2,theta3,'r:');
hold on
plot(t2,theta4,'b-');
% Plot of omega3 and omega4 as functions of theta2
figure(2)
plot(t2,angSpeed(1),'r:');
hold on
plot(t2,angSpeed(2),'b-');
% Plot of alpha3 and alpha4 as functions of theta2
figure(3)
plot(t2,angAccel(1),'r:');
hold on
plot(t2,angAccel(2),'b-');
% Plot of RPO2y as a function of RPO2x
figure(4)
plot(RPO2(1),RPO2(2));
% Plot of VPO2x as a function of RPO2x
figure(5)
plot(RPO2(1),VPO2(1));
% Plot of VPO2y as a function of RPO2y
figure(6)
plot(RPO2(2),VPO2(2));
% Plot of magnitude of VPO2 as a function of theta2
figure(7)
VPO2mag=sqrt(v(1,i)^2+v(2,i)^2);
plot(t2,VPO2mag);
% Plot of aPO2x as a function of RPO2x
figure(8)
plot(r(1),a(2));
% Plot of aPO2y as a function of RPO2y
figure(9)
plot(r(2),a(2));
% Plot of magnitude of aNO2 as a function of theta2
figure(10)
aNO2mag=sqrt(aN(1,i)^2+aN(2,i)^2);
plot(t2,aNO2mag);
error MATLAB Answers — New Questions
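The “Not enough input arguments” message in a script like this typically comes from the plot calls rather than from the functions themselves: in plot(t2,theta3,'r:') the bare name theta3 is a call with zero inputs, so the first use of an input inside theta3 fails, even though the same functions run fine earlier when called with arguments. Below is a minimal sketch of the usual fix, evaluating over the sweep first and plotting the stored arrays (sin stands in for the real theta3 so the fragment runs on its own):
t2 = (0:360)*pi/180;           % theta2 sweep, one value per degree, in radians
t3 = zeros(size(t2));          % preallocate the output
for i = 1:numel(t2)
    t3(i) = sin(t2(i));        % stand-in for theta3(K1,K4,K5,t2(i),oc)
end
plot(t2, t3, 'r:')             % plot data arrays, not bare function names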