Month: May 2024
Getting Started with Reliability on Azure: Ensuring Cloud Applications Stay Up and Running
As businesses increasingly rely on cloud services, the imperative for robust cloud solutions has never been greater. Azure stands at the forefront of this realm, offering architects and technology leaders a platform where reliability is not just a feature — it’s a core tenet.
The Essence of Reliability in Azure
Reliability is the bedrock upon which cloud architectures stand, indicative of a system's robustness to persistently deliver expected outcomes. It is defined not only by a service's uptime but also by its adherence to defined Service Level Objectives (SLOs) and Service Level Agreements (SLAs). These benchmarks encompass aspects such as Recovery Time Objective (RTO), the time within which functions must be restored post-disruption, and Recovery Point Objective (RPO), the maximum amount of data that can be lost or corrupted in a disruption for normal operations to resume. RPO applies not only to storage services but also to other data services such as databases, caches, and queues.
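To make these objectives concrete, a quick back-of-the-envelope check can tell you whether a backup schedule even has a chance of meeting your targets. The sketch below is purely illustrative (the function names are ours, not an Azure API): with periodic backups, the worst-case data loss is one full backup interval, so the interval must not exceed the RPO, and the measured restore time must not exceed the RTO.

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    # With periodic backups, a failure just before the next backup
    # completes loses up to one full interval of data.
    return backup_interval

def meets_objectives(backup_interval: timedelta, restore_time: timedelta,
                     rpo: timedelta, rto: timedelta) -> bool:
    return worst_case_data_loss(backup_interval) <= rpo and restore_time <= rto

# Hourly backups with a 30-minute restore meet a 2-hour RPO / 1-hour RTO...
print(meets_objectives(timedelta(hours=1), timedelta(minutes=30),
                       rpo=timedelta(hours=2), rto=timedelta(hours=1)))
# ...but not a 30-minute RPO.
print(meets_objectives(timedelta(hours=1), timedelta(minutes=30),
                       rpo=timedelta(minutes=30), rto=timedelta(hours=1)))
```

The same arithmetic generalizes to continuous replication, where the "interval" becomes the replication lag.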
In Azure, reliability means crafting services that are inherently designed to mitigate failures and swiftly rebound from them, with minimal-to-no disruptions experienced by end-users. This is achieved through a shared responsibility model: while Microsoft ensures the underlying infrastructure’s resilience, customers architect their solutions responsibly to exploit these provisions—fusing their understanding of business requirements with Azure’s powerful capabilities to uphold service continuity and meet or exceed their RTO and RPO.
The Pillars of Cloud Reliability
The pillars of cloud reliability are critical components of Azure’s architecture, designed to ensure dependable service delivery:
Robust Infrastructure: Azure operates a globally distributed network of data centers equipped with advanced redundancy capabilities. This infrastructure is pivotal in providing the resilient physical and virtual resources required for the high availability of applications.
Resilience by Design: Azure’s reliability is rooted in its strategic design choices. Solutions architected with resilience in mind are capable of withstanding operational pressures and rapidly recovering from disruptions, ensuring minimal impact on service continuity.
Continuous Operations: Rigorous monitoring, timely incident management, and ongoing system refinement are integral to maintaining the operational health of Azure services. This commitment to continuous operational excellence fortifies service reliability and addresses the evolving demands of cloud workloads.
The Frameworks and Tools Supporting Azure Reliability
Azure’s commitment to reliability is underpinned by two foundational frameworks: the Cloud Adoption Framework (CAF) and the Well-Architected Framework (WAF). These frameworks guide organizations through best practices, methodologies, and tools essential for building and maintaining reliable cloud solutions.
Cloud Adoption Framework (CAF):
The CAF provides an extensive set of guidelines, blueprints, and best practices that help streamline the journey to the cloud. It offers insights into readiness and planning, ensuring that foundational decisions support reliability from the outset. Key components include Azure Landing Zones, which configure networking, security, identity, and governance in line with Azure reliability principles.
Well-Architected Framework (WAF):
The WAF focuses on five pillars: cost optimization, operational excellence, performance efficiency, reliability, and security. It empowers architects to design resilient systems by adhering to these principles of architectural excellence in Azure. The reliability pillar emphasizes designing systems that are highly available, resilient, and able to recover rapidly from failures.
Azure Service Reliability Features:
Each Azure service offers built-in features and tools tailored for enhancing reliability. Some of the important tools are:
Azure Site Recovery: This service ensures business continuity by replicating workloads from primary to secondary regions, facilitating quick failover and minimizing service disruption during outages.
Azure Monitor and Application Insights: Combined, these services provide advanced monitoring, analytics, and diagnostics capabilities, affording real-time operational intelligence that supports swift and proactive incident management.
Azure Automation: With a focus on reducing manual intervention, Azure Automation offers process automation, update management, and configuration features that enhance the reliability of services by eliminating human error.
Architecting for Reliability
Leveraging strategic design choices, Azure enables systems to recover rapidly from disruptions while ensuring continuous operations, in line with Azure's reliability design principles.
Azure Landing Zones: Building Blocks for Reliable Cloud Operations
Azure Landing Zones are pre-defined, customizable environments that follow Microsoft’s Cloud Adoption Framework. They provide a structured setup process that incorporates best practices for security, compliance, and governance—forming a reliable foundation for your cloud journey. When setting up your landing zones, consider these reliability-focused factors:
Network Topology: Utilize Azure’s robust networking features to design a topology that emphasizes redundancy and failover capabilities.
Resource Organization: Structure your resources for coherence and ease of management, aligning them with your reliability objectives.
Identity and Access Management: Implement tight security controls to prevent unauthorized access which can compromise reliability.
Governance: Establish policies that enforce operational consistency and compliance, adding another layer of reliability protection.
For more information, refer to Azure Landing Zones.
Mission-Critical Reliability: Ensuring Resilience at Scale
For mission-critical services where the stakes are especially high, and reliability is imperative, Azure provides a robust toolkit and strategic methodologies to ensure resilience:
Geo-Redundancy: Implementing a multi-region architecture is pivotal for mission-critical applications. Azure facilitates the distribution of services across several geographic locations, safeguarding against regional failures. This approach not only enhances fault tolerance but also enables applications to remain functional and accessible, regardless of localized disruptions.
Disaster Recovery: To safeguard against significant and unexpected disasters, Azure Site Recovery offers a seamless replication service for virtual machines (VMs). It enables swift and structured failovers to alternate regions, ensuring critical applications experience minimal downtime. The service’s replication granularity empowers businesses to achieve their specific recovery objectives, be they related to RTO or RPO targets.
Auto-Scaling: Azure’s auto-scaling capabilities dynamically adjust resource counts to meet the workload’s current demands without human intervention for the services that support this feature. This is essential for meeting performance expectations during usage spikes or unpredicted load increases and for optimizing resource utilization during quieter periods. Such elasticity is vital for maintaining consistent performance levels and operational efficiency.
Monitoring and Diagnostics: Provisioning powerful monitoring tools like Azure Monitor and Azure Application Insights affords organizations real-time visibility into their operational landscape. With these tools, you gain actionable insights, can set up automated alerts for anomaly detection, and pre-empt potential issues based on trends and patterns. The detailed diagnostics provided support rapid issue identification and resolution, which is crucial for mission-critical systems.
By integrating these practices within the architectural fabric, mission-critical services on Azure can achieve the sought-after continuous reliability—delivering consistent service levels and fostering user trust and satisfaction.
For more information, refer to Mission Critical Guidance.
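The auto-scaling behavior described above boils down to a decision rule evaluated against recent metrics. The sketch below is a conceptual illustration only; in Azure you would express this declaratively as autoscale rules in Azure Monitor rather than code it yourself, and the thresholds and names here are assumptions for the example:

```python
def desired_instance_count(current: int, avg_cpu_percent: float,
                           min_count: int = 2, max_count: int = 10,
                           scale_out_at: float = 70.0,
                           scale_in_at: float = 30.0) -> int:
    # Scale out under sustained high CPU, scale in when load is low,
    # and always stay within the configured [min_count, max_count] band.
    if avg_cpu_percent >= scale_out_at:
        current += 1
    elif avg_cpu_percent <= scale_in_at:
        current -= 1
    return max(min_count, min(max_count, current))

print(desired_instance_count(3, 85.0))   # spike: add an instance
print(desired_instance_count(2, 10.0))   # quiet: already at the floor
```

Keeping a non-zero floor (`min_count`) is itself a reliability choice: it preserves capacity for failover even during quiet periods.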
Reference Architecture for Reliability
Reliability in the cloud isn’t just about having the right tools and services; it’s about weaving those elements into an architecture that inherently embodies resilience and fault tolerance. A strategic approach towards crafting a reliable Azure architecture requires a holistic view that spans compute, storage, database, and networking resources.
To illustrate, let’s delve into a reference architecture that showcases Azure’s reliability principles in action. This architecture demonstrates how various Azure services interconnect to establish a dependable cloud infrastructure, ensuring seamless, continuous operations.
Detailing the Reference Architecture for Reliability
The reference architecture encompasses various Azure services, each contributing to the overall reliability in different ways. Below, we dissect this architecture to understand how the components interrelate and support each other to create a reliable and resilient environment:
Azure Compute Services:
Azure Virtual Machines (VMs): These serve as the backbone, hosting applications and services. To ensure their reliability, leverage Azure Backup, a service offering automated backup solutions that protect VMs from data loss and facilitate easy recovery. Integrating frequent and consistent backups safeguards your data against accidental deletions, corruption, or attacks.
Azure Site Recovery (ASR): Complementing Azure Backup, ASR provides a disaster recovery solution by replicating your Azure VMs to a different availability zone or region. In the event of an outage, you can orchestrate a failover to the replicated VMs situated in the secondary site. This setup ensures minimal downtime and adherence to RTOs (Recovery Time Objectives).
Azure Kubernetes Service (AKS):
Backup and Recovery: The fabric of modern applications often includes containerized solutions orchestrated by AKS. Reliable operation means deploying consistent backups of AKS cluster data, including Persistent Volume (PV) backups, Kubernetes resource configurations, and databases running within the cluster.
Multi-Zone Clusters: AKS supports pod distribution across Availability Zones within a region, ensuring workload continuity in case of a failure in one zone. You can also use services such as Azure Load Balancer or Azure Application Gateway to balance the traffic across zones.
Multi-Regional Clusters: AKS supports deploying clusters across multiple regions, enhancing the resilience and scalability of your applications. You can use services such as Azure Traffic Manager and Azure Cosmos DB to distribute user traffic and data across regions, and orchestrate failover scenarios using Azure Site Recovery.
Azure Storage Services:
Geo-replication: Storage services such as Azure Blob Storage and Azure Queue Storage employ geo-replication strategies to synchronize data across geographically distributed data centers. By doing so, they provide data availability protection against regional outages.
Redundant Storage: Redundancy options, such as Locally-Redundant Storage (LRS) or Zone-Redundant Storage (ZRS), ensure that copies of your data are safely stored within a region or across multiple locations within a region, further fortifying data protection measures.
Azure Database Services:
Automated Backups: Azure services like Azure SQL Database and Azure Cosmos DB offer automated backup features. Automated backups provide a low maintenance approach to protect your databases, enabling the ability to restore databases to a previous point in time quickly in case of data corruption or loss.
Geo-Restore: In addition to regular backups, geo-restore functionalities allow restoration of databases across different geographical regions. In disaster events, this ability is pivotal in maintaining operational continuity and data availability.
By following these architectural principles, you design a robust system that builds resilience and reliability into every layer of its stack. From compute resources down to data storage, the architecture enables a cohesive approach to disaster recovery, high availability, and operational effectiveness.
A well-constructed architecture is a critical element in the journey to achieving high reliability on Azure. A reference architecture serves as the blueprint for integrating Azure’s resilience principles into your applications. By doing so, you design an ecosystem that not only copes with adverse events but also sustains service continuity and data integrity, thereby meeting high availability standards.
Azure Verified Modules for Reliability
Azure Verified Modules (AVM) is an initiative to consolidate and set the standards for what a good Infrastructure-as-Code module looks like. AVM provides a common code base, a toolkit for customers and partners, that accelerates consistent development and delivery of cloud-native solutions by codifying Microsoft guidance (the WAF) with best-practice configurations.
In this article we want to highlight a sample AVM module designed to set up reliable Azure-to-Azure replication for disaster recovery. It supports replication across regions or across zones within the same region. It's located at GitHub – Azure/terraform-azurerm-avm-ptn-bcdr-vm-replication: AVM Pattern Module to use Azure Site Recovery to replicate Virtual Machines at Scale between locations. The module replicates virtual machines within Azure from a source to a target location, handling all intermediary replication policies and resource configurations.
This module provides functionality for:
Creating or using an existing Recovery Services Vault.
Replicating virtual machines between Azure regions or between zones within the same region.
Handling recovery policies, replication policies, and protection container mappings.
Dealing with resource dependencies for orderly creation and deletion.
Conclusion
Commencing your reliability journey on Azure signals a commitment to operational excellence. By leveraging Azure’s global infrastructure, proactive design strategies, and a comprehensive suite of tools and best practices, you can pave the way for reliable, scalable, and resilient cloud environments. Elevate your cloud solutions to be ready for any challenge with the power of Azure’s reliability features.
Become a champion of reliability and let Azure be the silent force empowering you to deliver steadfast cloud solutions to your stakeholders and customers—today, tomorrow, and into the future.
Thanks to the people that contributed to this article: Harshitha Putta, Laura Grob, Zach Olinske and the PaceSetter reliability team.
Microsoft Tech Community – Latest Blogs
How to quickly share files between Windows 10 and MacBook?
I bought a Mac recently and I need to share files between the two devices frequently. Right now, I copy and paste the files with a USB flash drive. Is there any quick way better than USB?
Unable to register for Microsoft Innovation Challenge
I completed the certification requirement for the Microsoft Innovation Challenge, but when I try to register for the Hackathon using the site in the voucher email, I get an error. I’m using the same credentials I used for Microsoft Learn, and after logging in, I get this message:
Request Id: 2e78fbe9-d937-4238-93c4-00350f291c03
Correlation Id: bb5160aa-afd1-492d-8456-723682b9edbc
Timestamp: 2024-05-27T03:22:12Z
Message: AADSTS500200: User account ’email address removed for privacy reasons’ is a personal Microsoft account. Personal Microsoft accounts are not supported for this application unless explicitly invited to an organization. Try signing out and signing back in with an organizational account.
I’m not sure how to get past this error and have contacted both the organizer for BIT and Microsoft for support to no avail.
Can anyone provide any insight on how to resolve this issue?
Website used to register:
https://blacksintechnology.org/microsoft-innovation-challenge/
How to Convert DBX to PST Format?
Download and install the XConverterPro DBX to PST Converter software on your PC. This software helps you convert .dbx files to .pst format while maintaining the email folder structure, ensuring no data loss. It supports DBX files created by older versions of Outlook Express.
Steps to Convert DBX to PST in 2024
Step 1: Run XConverterPro DBX to PST Converter on your PC.
Step 2: Click on Open >> Email Data Files >> DBX files.
Step 3: Preview email content of .dbx files.
Step 4: Click Export & select PST as a saving option.
Step 5: Browse location and click Save button.
Finished! The software will start converting the DBX to PST format quickly. The best part is that you can also convert the DBX file to 10+ other file formats. Download the software for free and try it now.
Is it the right time to upgrade Windows 10 to Windows 11?
My Windows 10 PC received a full-screen prompt asking me to upgrade to Windows 11. Unfortunately, I had a hard time skipping the prompt. It seems like Microsoft is aggressively pushing people to get it. Has Windows 11 finally become worthy enough to make the switch?
Laplace transform not getting Value
I tried to solve a Laplace transform but I am not getting the value (Xs). Find the code below; is there any issue in the code?
syms x(t) Xs
eqn = diff(x,t,2)+2*diff(x,t)+26*x(t) == 10*cos(t)*(heaviside(t-pi));
eqnLT = laplace(eqn)
eqnLT = subs(eqnLT,laplace(x(t)), Xs);
eqnLT = subs(eqnLT, {x(0), diff(x(t), t, 0)}, {1/2,1});
Xs = solve(eqnLT, Xs)
MATLAB Answers — New Questions
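One likely culprit in the code above: `diff(x(t), t, 0)` appears to be the zeroth derivative, i.e. `x(t)` itself, so the initial-slope condition x'(0) = 1 is probably never substituted; the usual MATLAB pattern is `subs(diff(x(t), t), t, 0)`. Independent of the symbolic route, a numerical integration is a handy cross-check for whatever X(s) you eventually invert. The sketch below (Python/SciPy rather than MATLAB, purely as an illustration) solves the same ODE x'' + 2x' + 26x = 10 cos(t) u(t − π) with x(0) = 1/2, x'(0) = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # State y = [x, x']; the forcing switches on at t = pi (Heaviside step).
    x, v = y
    forcing = 10.0 * np.cos(t) if t >= np.pi else 0.0
    return [v, forcing - 2.0 * v - 26.0 * x]

sol = solve_ivp(rhs, (0.0, 10.0), [0.5, 1.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

# Before t = pi the response is the damped free oscillation
# x(t) = exp(-t) * (0.5*cos(5t) + 0.3*sin(5t)),
# since s^2 + 2s + 26 = 0 gives s = -1 +/- 5i.
print(sol.sol(1.0)[0])
```

Any symbolic X(s), once inverse-transformed, should match this trajectory pointwise.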
How to code a double pendulum using RK4
Please help me with this problem.
It has to satisfy these conditions below.
Simulate the motion of the double pendulum for the following two initial conditions and observe the difference in motion (butterfly effect)
Initial conditions 1: Initial angles theta=pi/2, omg=pi/2 (initial speeds are all 0)
Initial conditions 2: Initial angles theta=pi/2, omg=pi/2+0.001 (initial speeds are all 0)
Precautions 1: Use RK4
Precautions 2: fps should be 30 frames per second
Precautions 3: Calculate by changing dt=1/30/50, 1/30/100, 1/30/200, 1/30/400, etc. and find a reliable size of dt
Precautions 4: Simulate 25 seconds of motion
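The requirements above can be sketched as follows. This is an illustrative implementation in Python/NumPy rather than MATLAB (treat it as pseudocode for the structure): standard equal-mass, equal-length double-pendulum equations of motion, a classic RK4 stepper, and two runs whose second initial angle differs by 0.001 to expose the butterfly effect. Masses, lengths, and g are assumed values, and "omg" is interpreted as the second pendulum's angle.

```python
import numpy as np

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0  # assumed parameters

def deriv(y):
    # y = [theta1, omega1, theta2, omega2]; standard double-pendulum ODEs.
    th1, w1, th2, w2 = y
    d = th2 - th1
    den = M1 + M2 * np.sin(d) ** 2
    a1 = (M2 * L1 * w1**2 * np.sin(d) * np.cos(d)
          + M2 * G * np.sin(th2) * np.cos(d)
          + M2 * L2 * w2**2 * np.sin(d)
          - (M1 + M2) * G * np.sin(th1)) / (L1 * den)
    a2 = (-M2 * L2 * w2**2 * np.sin(d) * np.cos(d)
          + (M1 + M2) * (G * np.sin(th1) * np.cos(d)
                         - L1 * w1**2 * np.sin(d)
                         - G * np.sin(th2))) / (L2 * den)
    return np.array([w1, a1, w2, a2])

def rk4_step(y, dt):
    # Classic fourth-order Runge-Kutta step.
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(y0, t_end=25.0, dt=1 / 30 / 100):
    # dt is a fraction of the 30 fps frame time, per the assignment;
    # halve dt until the trajectory stops changing to find a reliable size.
    y = np.asarray(y0, dtype=float)
    out = [y]
    for _ in range(int(round(t_end / dt))):
        y = rk4_step(y, dt)
        out.append(y)
    return np.array(out)

# Two runs differing by 0.001 in the second angle (butterfly effect).
a = simulate([np.pi / 2, 0.0, np.pi / 2, 0.0], t_end=5.0)
b = simulate([np.pi / 2, 0.0, np.pi / 2 + 0.001, 0.0], t_end=5.0)
print(np.max(np.abs(a - b)))  # the tiny initial difference grows over time
```

For animation at 30 fps, render every `int(1/30/dt)`-th state; total energy is a good sanity check on the chosen dt.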
Increasing incoming traffic to the FSLogix profile server
Hello everybody!
We have configured FSLogix 2210 3 (2.9.8784.63912) on Windows Server 2022 (April updates) running the RDS role (70 RDSH hosts).
We only use Profile Containers. All profiles are on the same server, approximately 500 users.
Since the first of May, incoming traffic to the server has doubled, to 1.2 Gbps.
We use O365, OneDrive, and Teams.
So far, we can't find which apps are constantly writing to the users' profiles.
Is this a normal amount of traffic for 500 users?
Perhaps you have some ideas about this?
Windows 11 Surface Software Keyboard Emoticon hide?
I have a Surface tablet. I want to hide this icon. Does anyone know how to do it? Actually the entire bar; I want to reclaim the space, it's a waste.
How do I have Win11 File Explorer display Recycle Bin?
Version 23H2 (OS Build 22631.3447)
Recycle Bin used to be displayed with Win10 File Explorer. Win11 File Explorer does not display Recycle Bin.
When I search with “Everything”, it displays “$Recycle.Bin”. How do I have Win11 File Explorer display Recycle Bin?
Also, wondering why the $ in the title?
I need permission from Trusted Installer to delete folders
I want to delete a folder on my data HDD. It’s not my boot drive. I want it to look nice and organized for my backups, but this Program Files folder bothers me.
I want to delete Program Files and the folder inside of that called ModifiableWindowsApps. I did a CMD TAKEOWN of the folder Program Files and that was a success, but when I delete it, it says I need my own account’s permission to delete it and it won’t follow through. The folder inside of it called ModifiableWindowsApps still has TrustedInstaller and I can’t figure out how to address that in CMD to do a TAKEOWN of it, and I’m not sure a TAKEOWN would even help me here either. I have screenshots here to help.
The folders are empty and they’re holdovers from my old installation, and I use a different boot drive now, so it’s fine to delete them. I just can’t. Thanks all!
Too many processes running background
In an effort to improve performance I have edited processes and services. As viewed in Task Manager > Details, I have noticed a few entries with multiple instances running in the background. I can stop them, but I wish to take preventive action to keep these multiple processes from building up. (See the attached snip for a Firefox example.)
Please offer me helpful solutions.
lenovo laptop
OS: Windows 11 home
version: 23H2
How to process data that have the same row name
Hello. I am currently testing some code, but I ran into a loop problem that is too challenging for me; I have already faced it more than ten times (I am still a newbie). Below is my code. My goal is to process the raw data and extract the features as stated in the code. Basically, the problem that I am facing is:
Error using tabular/vertcat Duplicate table row name: ‘2004.03.15.12.0’.
Error in Third_Test (line 55) Time_feature_matrix1 = [Time_feature_matrix1; df];
The file is too big so with that I share my link to GDrive: https://drive.google.com/drive/folders/1MltnJyAUh1BJpoPdNpqOUiHfhdbGF4wV?usp=drive_link
Here is the code:
% Initialize empty tables for each bearing
Time_feature_matrix1 = table();
Time_feature_matrix2 = table();
Time_feature_matrix3 = table();
Time_feature_matrix4 = table();
% Specify the test set and bearing numbers
test_set = 3;
bearing_numbers = [1, 2, 3, 4];
% Set the path to the directory containing the data files
path = 'file directory';
% Get list of files in the directory
files = dir(fullfile(path, '*'));
% Loop through the files in the directory
for j = 1:length(files)
filename = files(j).name;
if files(j).isdir % Skip directories
continue;
end
file_path = fullfile(path, filename); % Full path of the file
dataset = readtable(file_path, 'Delimiter', '\t', 'FileType', 'text'); % Read the dataset
% Loop through each bearing number
for j = 1:length(bearing_numbers)
bearing_no = bearing_numbers(j);
% Extract the bearing data
bearing_data = dataset{:, bearing_no};
% Calculate features
feature_matrix = [
max(bearing_data), % Max
min(bearing_data), % Min
mean(bearing_data), % Mean
std(bearing_data, 1), % Std
sqrt(mean(bearing_data.^2)), % RMS
compute_skewness(bearing_data), % Skewness
compute_kurtosis(bearing_data), % Kurtosis
max(bearing_data) / sqrt(mean(bearing_data.^2)), % CrestFactor
sqrt(mean(bearing_data.^2)) / mean(bearing_data) % FormFactor
];
df = array2table(feature_matrix.'); % Transpose for correct orientation
df.Properties.VariableNames = {'Max', 'Min', 'Mean', 'Std', 'RMS', 'Skewness', 'Kurtosis', 'CrestFactor', 'FormFactor'};
df.Properties.RowNames = {[filename(1:end-4)]}; % Append bearing number
df.Properties.RowNames = {rowName};
% Append the table to the corresponding bearing's feature matrix
switch bearing_no
case 1
if ~ismember(rowName, Time_feature_matrix1.Properties.RowNames)
Time_feature_matrix1 = [Time_feature_matrix1; df];
end
case 2
if ~ismember(rowName, Time_feature_matrix2.Properties.RowNames)
Time_feature_matrix2 = [Time_feature_matrix2; df];
end
case 3
if ~ismember(rowName, Time_feature_matrix3.Properties.RowNames)
Time_feature_matrix3 = [Time_feature_matrix3; df];
end
case 4
if ~ismember(rowName, Time_feature_matrix4.Properties.RowNames)
Time_feature_matrix4 = [Time_feature_matrix4; df];
end
% case 1
% Time_feature_matrix1 = [Time_feature_matrix1; df];
% case 2
% Time_feature_matrix2 = [Time_feature_matrix2; df];
% case 3
% Time_feature_matrix3 = [Time_feature_matrix3; df];
% case 4
% Time_feature_matrix4 = [Time_feature_matrix4; df];
end
end
end
% Define function to compute skewness
function skewness = compute_skewness(x)
n = length(x);
third_moment = sum((x - mean(x)).^3) / n;
s_3 = std(x, 1)^3;
skewness = third_moment / s_3;
end
% Define function to compute kurtosis
function kurtosis = compute_kurtosis(x)
n = length(x);
fourth_moment = sum((x - mean(x)).^4) / n;
s_4 = std(x, 1)^4;
kurtosis = fourth_moment / s_4 - 3;
endHello. Currently I am doing a testing codes. But then, I found a loop problem which is too challenging for me since I am already facing it more than ten times (I am still a newbie). Below is my coding. My goals is to process the raw data and extract the features as stated in the code. Basically the problem that I am facing is that:
data processing MATLAB Answers — New Questions
View shows unassigned OKRs even though the filter is set to a single team
I created a view with a filter for one team; however, the view also shows unassigned OKRs. Any ideas what is causing this behavior?
How to speed up feature extraction task using GoogleNet?
I have some queries related to the speed of pretrained networks used for feature extraction.
Query 1:
I am using the pretrained network GoogLeNet (installed from MATLAB Add-Ons) for feature extraction from images. Right now I can extract features from 5 frames per second. If I have no option to upgrade the computational power of my machine, how can I increase the speed of feature extraction with the same network (say, to 20 or 30 frames per second)?
Query 2:
What are the different factors that determine the speed of pretrained networks for feature extraction? Can anyone please elaborate on this?
If my queries are not clear, feel free to comment.
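Since the hardware is fixed, the usual first lever is batching: calling the network once per frame pays the fixed per-call overhead on every frame, while one call on a stacked batch amortizes it. A hedged sketch (assumes Deep Learning Toolbox; `'pool5-7x7_s1'` is GoogLeNet's global average pooling layer, and the random frames below are placeholders for real video frames):

```matlab
% Sketch: extract features for a whole batch of frames in one activations() call.
net  = googlenet;                          % pretrained network from the Add-On
inSz = net.Layers(1).InputSize;            % [224 224 3]
nFrames = 32;
batch = zeros([inSz nFrames], 'single');   % 4-D stack: H x W x C x N
for k = 1:nFrames
    frame = rand(480, 640, 3, 'single');          % placeholder frame
    batch(:,:,:,k) = imresize(frame, inSz(1:2));  % resize to network input size
end
feats = activations(net, batch, 'pool5-7x7_s1', ...
    'MiniBatchSize', nFrames, 'ExecutionEnvironment', 'auto');
```

On Query 2, the main factors are batch size, input resolution, network depth and width, numeric precision, and whether execution runs on CPU or GPU; larger `'MiniBatchSize'` values and `'ExecutionEnvironment','gpu'` (with a supported GPU) typically matter most.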
deep learning, pre-trained networks, googlenet, machine learning, feature extraction, speed of neural networks MATLAB Answers — New Questions
Extract table from image using copilot
Hi everyone,
I could use some help 🙂
How do I use Copilot to copy a table from an image? The table is in a larger PDF, and I want to paste it into Copilot using the screenshot function.
The table can look like the example below. For each row in column A, there can be several rows in column B.
I can only do it if I give the AI a lot of information, e.g. what the rows in column A are called. I use Copilot for business (I think only the data protection is different there).
This is part of a bigger task, but Copilot already fails to read in the table, even when I tell it to use OCR.
The table can look like this example. A is a merged cell; it is not two rows with 'A':
0 | cloudy day
A | 1. info
  | 2. more info
B | 1. info
  | 2. more info
MATLAB Coder crashes when opened in Ubuntu 18.04
Whenever I try to open MATLAB Coder from MATLAB, MATLAB crashes and gives this error:
Gtk-Message: 12:36:13.523: Failed to load module "overlay-scrollbar"
[0527/123639.510619:ERROR:gl_utils.cc(319)] [.WebGL-0x25dc130]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels
I am using MATLAB R2022b on Ubuntu 18.04.6 with GCC 11.4. Can somebody help explain this error?
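The "GPU stall due to ReadPixels" line points at the graphics driver rather than Coder itself. One hedged workaround (an assumption based on similar GPU-driver crashes, not a confirmed fix for this exact error) is to force software OpenGL rendering:

```matlab
% Assumption: a driver-related rendering crash. Persist the
% software-rendering preference, then restart MATLAB.
opengl('save', 'software')
```

Alternatively, starting MATLAB from a terminal with `matlab -softwareopengl` applies the setting for a single session. If the crash persists, updating the GPU driver or trying a newer MATLAB release are the next things to rule out.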
matlab coder, crash, ubuntu 18.04, matlab r2022b MATLAB Answers — New Questions
color themes for section
Hello,
How can I set a color theme for each section of a document? I have a multi-section document where each section has a leading color (blue, green, yellow, etc.). Captions, frames, etc. should follow the section's color theme to stay organized, but Word only allows one custom theme for the whole document. How can I set a theme for a single section?
Excel add same formula easy way
Hello,
I have numbers in cells C1 to Z1.
I would like to multiply each of these cells by the value in A1.
Example: =A1*C1, =A1*D1, …, =A1*Z1
I know I could write them one by one, but is there a simple way to apply the formula to all the cells?
If I change the value in A1, all the results should update; the products over C1:Z1 are not fixed and have to change whenever A1 changes.
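A minimal sketch (assuming the products should appear in row 2): lock the multiplier with an absolute reference and fill right.

```
=$A$1*C1
```

Enter this in C2 and drag the fill handle right to Z2; the $ signs keep A1 fixed while C1 advances to D1 … Z1, and any change to A1 recalculates every product automatically. In Excel 365, a single dynamic-array formula `=A1*C1:Z1` entered in C2 spills the whole row at once.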
Windows 10 and 2003
I’m 46, building VirtualBox VMs of internal workstation servers on Windows NT 3.5, NT 4.0, Windows XP, and Windows Server 2003, from a retired Gold Partner Shared Source Microsoft contract. I would like to know if anyone is interested, and if so, where the ACM is at a NAT downstream internet, if there is such a thing, or what I would need to connect older computers to the internet. My startup idea is a Virginia computer museum with older computers such as the MITS Altair, OpenVMS, CTSS from XKL, and other UNIX and System V variants, using Alpha, VAX, MIPS, and x86 architectures. I’m trying to find a Jazz computer, and also Novell DR-DOS, MS-DOS, and NetWare. I’ve been working with SCO Skunkware and OpenServer v5 source in Windows Server 2003 Services for UNIX, and with Windows Embedded 2009 Standard to use the SDL engineering files. I’ve made a Lab03_n buffer lab for Vista from documentation on the beta wiki, and I’ve been trying to contact Microsoft’s Code Center Premium, but nobody replies. I participated in Microsoft BizSpark, which came with Windows CE Shared Source editions; I also have my HP FTP cache. I also make FreeBSD, NetBSD, and OpenBSD VMs with the Common Desktop Environment, and VMs with OpenStep, OS/2, and System V from AT&T. I try to find all of the telnet servers, and also Perl, Apache, and GNU caches.
Thank You,
Jonathan