Category: News
Request: comment character % vs C++ //
I’m switching back and forth between C/C++ for my Arduino Pico and MATLAB App Designer. It would be really nice if, when I type // to start a comment, you could switch it to %. You have so much help with completion of typed text that this shouldn’t be hard for the next release. I’m getting the hang of MATLAB, but it can still be irritating for "real" programmers. 🙂
Tags: comment
Source: MATLAB Answers — New Questions
The whole Windows 10 interface changed automatically after the last update
This morning the Windows 10 interface was as usual. When I came home later and turned on my laptop, it automatically started to install a Windows 10 update. I restarted it three times, and when it came back on, the whole interface had completely changed; it doesn’t even look like Windows 10 anymore, more like a kind of minimized version of Windows. I didn’t touch anything; it changed by itself. I even looked in System Properties and adjusted what I could, but the interface did not change back. My version is 22H2. Does anyone know how to solve this problem?
I think it would be good to simply update to Windows 11 and automatically install the new Windows 11 graphics.
Filtering a specific wavenumber range in the Fourier transform result
Hello. I am currently dealing with the interference signal from a coherence scanning interferometer. My plan for processing the signal is as follows:
Apply a discriminator to normalize the signal to the center amplitude at zero.
Zero-pad the signal to increase the resolution in the frequency domain.
Do an FFT to convert the interference signal from scanning distance to the spatial frequency domain (um to rad/um).
Filter out the domain of interest (my light source is 380 nm to 780 nm, so I only care about the range from 8 to 17 rad/um).
Calculate and unwrap the phase from the filtered result.
Compensate the phase with the phase error calculated from material properties.
Do an IFFT with the compensated phase to convert it back to an interference signal.
Figure 1. Original phase from interferogram.
Figure 2. Calculated phase error.
The problem is that the wavenumber range of the interference signal is 0 to ~25 rad/um, while the phase error I calculated only covers 6 to 30 rad/um. I would like to know whether there is a way to extract only the wavenumber range from 8 to 17 rad/um and do the rest of my work on that range. I have also shared my code and raw data here.
Thank you all for reading my problem. Hope you have a nice weekend.
Tags: fft, signal processing
Source: MATLAB Answers — New Questions
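The band extraction in the filtering step can be done directly on the FFT output by building the wavenumber axis and masking it. This is a minimal sketch, not the poster's actual code: the signal vector `I` and the scan step `dz` are hypothetical names, and a one-sided (positive-frequency) spectrum convention is assumed.

```matlab
% Minimal sketch: extract the 8–17 rad/um band from the spectrum.
% Assumptions: I is the zero-padded interference signal (hypothetical name),
% dz is the scanning step in um, and only the positive-frequency half is kept.
N  = numel(I);
S  = fft(I);
k  = (0:N-1) * 2*pi / (N*dz);      % spatial frequency axis in rad/um
band = (k >= 8) & (k <= 17);       % wavenumber range of interest
kBand = k(band);                   % axis restricted to the band
Sband = S(band);                   % spectrum restricted to the band
phi = unwrap(angle(Sband));        % unwrapped phase over the band only
```

Since the calculated phase error covers 6 to 30 rad/um, interpolating it onto `kBand` (e.g. with `interp1`) before subtracting keeps both quantities on the same axis.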
Preprocessing PTB-XL dataset in MATLAB
Hello! How can I open specific records from the PTB-XL dataset and process them in MATLAB? I want to first load the ECG leads stored in the .dat files one by one so that I can preprocess them, for example by applying digital filters, before creating a composite lead (a mixture of the 12-lead ECG in one waveform). I have the WFDB toolbox from PhysioNet; however, it is not working on this dataset. I have the dataset downloaded on my laptop. Thank you!
Tags: ecg, signal processing, physionet, dataset, wfdb tool
Source: MATLAB Answers — New Questions
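PTB-XL signal files are stored in WFDB format 16 (16-bit little-endian integers with the 12 leads interleaved sample by sample), so a record can be read with plain `fread` even when the WFDB toolbox misbehaves. This is a sketch under that assumption; the record path and the gain of 1000 ADC units/mV below are examples and should be verified against the record's matching .hea header.

```matlab
% Sketch: read one PTB-XL record without the WFDB toolbox.
% Assumes WFDB format 16 (int16, little-endian, 12 interleaved leads);
% the path and gain are examples — check them in the record's .hea file.
fid = fopen('records500/00000/00001_hr.dat', 'r', 'ieee-le');
raw = fread(fid, [12, Inf], 'int16');   % one row per lead
fclose(fid);
gain = 1000;                             % ADC units per mV (from the .hea file)
ecg  = raw.' / gain;                     % samples x 12 leads, in mV
lead2 = ecg(:, 2);                       % e.g. pick lead II for filtering
```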
How do I find the name of the currently running MATLAB live script?
I have a MATLAB live script case study which calls many .m functions. At the bottom of the live script, I need to save the results of my case study to a .mat file, and I want the name of the .mat file to be the same as my case study file. For example, if my case study file is named Case_Something.mlx, then I want the associated results file to be named Case_Something.mat.
I have been relying on the users to change the name of the .mat file for each case they create, but if they forget, they overwrite someone else’s .mat file. If I can recover the name "Case_Something" from the running .mlx filename, I can make this automatic. All the descriptions I have found of how to do this are examples for .m files, not live scripts, and those solutions don’t work.
Is there a way to do this with live scripts?
Thanks for any help.
Tags: live script, filename, running, save, with same name as
Source: MATLAB Answers — New Questions
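One approach that works for live scripts is `matlab.desktop.editor.getActiveFilename`, which returns the full path of the document currently active in the Editor. A sketch, assuming the live script is run from the desktop Editor (so the active document is the running script) and that the data to save lives in a hypothetical variable named `results`:

```matlab
% Sketch: name the .mat file after the running live script.
% Assumes the script is run from the desktop Editor via the Run button,
% so the active editor document is this script; 'results' is hypothetical.
fullPath = matlab.desktop.editor.getActiveFilename;   % e.g. ...\Case_Something.mlx
[folder, baseName] = fileparts(fullPath);
matFile = fullfile(folder, [baseName '.mat']);        % ...\Case_Something.mat
save(matFile, 'results');
```

Note the caveat: if a user clicks into a different editor tab before the save line runs, the "active" document changes, so this is a convenience rather than a guarantee.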
How can we calculate the surface area of this leaf in MATLAB?
<</matlabcentral/answers/uploaded_files/10685/images.jpg>>
Only the leaf.
Tags: image segmentation, color segmentation, leaf
Source: MATLAB Answers — New Questions
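A common approach is to segment the leaf by its green color excess and convert the pixel count to area with a known calibration. This is a sketch with assumed values throughout: the file name, the 0.1 threshold, and `pixelSize_mm` are all hypothetical, and a real area estimate needs a scale reference (e.g. a ruler) in the photo.

```matlab
% Sketch: estimate leaf area from a color photo (all constants are assumptions).
% Requires the Image Processing Toolbox for imfill/bwareafilt.
rgb = im2double(imread('images.jpg'));            % hypothetical file name
greenExcess = rgb(:,:,2) - (rgb(:,:,1) + rgb(:,:,3))/2;
mask = greenExcess > 0.1;                         % tune this threshold
mask = imfill(mask, 'holes');                     % close gaps inside the leaf
mask = bwareafilt(mask, 1);                       % keep only the largest blob
pixelSize_mm = 0.2;                               % mm per pixel, from calibration
leafArea_mm2 = nnz(mask) * pixelSize_mm^2;        % pixel count times pixel area
```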
Check This Out! (CTO!) Guide (June 2024)
Hi everyone! Brandon Wilson here once again with this month’s “Check This Out!” (CTO!) guide.
These posts are only intended to be your guide, to lead you to some content of interest, and are just a way we are trying to help our readers a bit more, whether that is learning, troubleshooting, or just finding new content sources! We will give you a bit of a taste of the blog content itself, provide you a way to get to the source content directly, and help to introduce you to some other blogs you may not be aware of that you might find helpful.
From all of us on the Core Infrastructure and Security Tech Community blog team, thanks for your continued reading and support!
Title: Stop Worrying and Love the Outage, Vol III: Cached Logons
Source: Ask the Directory Services Team
Author: Chris Cartwright
Publication Date: 6/20/24
Content excerpt:
This is the third post in a series where I try to provide the IT community with some tools and verbiage that will hopefully save you and your business many hours, dollars, and frustrations. Occasionally, we get cases for users working remotely that are unable to log on with a message that the domain is not available. More often than not, this is caused by an overly enthusiastic Cached Logon configuration.
Title: General Availability of SQL FCI and AG Features SQL Server Enabled by Azure Arc
Source: Azure Arc
Author: Abdullah Mamun
Publication Date: 6/14/24
Content excerpt:
We have good news. Two business continuity features for SQL Server enabled by Azure Arc are now generally available:
View Failover Cluster Instance
Manage Availability Group
Title: Announcing enhanced multicloud integration enabled by Azure Arc
Source: Azure Arc
Author: Meagan McCrory
Publication Date: 6/18/24
Content excerpt:
We are thrilled to announce a new set of capabilities for multicloud customers, making it easier than ever to manage cloud resources from a centralized platform. With the adaptive cloud approach enabled by Azure Arc, customers can quickly and easily access and manage their workloads across Azure and AWS through the multicloud connector, which is free to use!
Title: WS2012 ESU Updates
Source: Azure Arc
Author: Aurnov Chattopadhyay
Publication Date: 6/24/24
Content excerpt:
We have a myriad of key updates for customers enrolled in WS2012/R2 ESUs enabled by Azure Arc! As we continue to refine and expand the offer, our investments have focused on reducing friction and improving the usability of WS2012/R2 ESUs enabled by Azure Arc. We’re excited to announce our brand-new usage view, a preview of the transition scenario, and improvements to prerequisites, billing, and included capabilities.
Source: Azure Compute
Author: Micah McKittrick
Publication Date: 6/16/24
Content excerpt:
Today we are announcing the public preview of upgrade policies for Virtual Machine Scale Sets with Flexible Orchestration. Upgrade policies allow for more granular control over the upgrade process, ensuring that your services remain available and responsive during updates.
Title: Azure Pricing: How to navigate Azure pricing options and resources
Source: Azure Governance and Management
Author: Kyle Ikeda
Publication Date: 6/13/24
Content excerpt:
Now, we will dive deeper into how Azure pricing works and how you can learn more about it. We will use the example of Contoso, a hypothetical digital media company, to show how they use Azure pricing resources to guide their migration to the cloud.
Title: Azure Pricing: How to estimate Azure project costs
Source: Azure Governance and Management
Author: Kyle Ikeda
Publication Date: 6/13/24
Content excerpt:
Let’s look at how you can calculate your project costs when migrating or building a new solution in Azure. We will continue to use the example of Contoso, a hypothetical digital media company, and how they use Azure pricing resources to guide their migration to the cloud.
Title: Azure pricing: How to calculate costs of Azure products and services
Source: Azure Governance and Management
Author: Kyle Ikeda
Publication Date: 6/13/24
Content excerpt:
In our previous blogs we explained the Azure pricing structure and how customers can estimate their project costs when migrating to Azure or building a cloud-native application. We introduced readers to Azure Migrate, the Total Cost of Ownership (TCO) Calculator, pay-as-you-go account, and the Azure Architecture Center. Now we will go a step further to address the needs of a customer who has decided to migrate their workloads or deploy cloud-native solutions and wants to budget for the specific Azure services they’ll be using. We will continue using the example of our digital media company, Contoso, and how they use Azure services to feel confident they’re getting the best value at every stage of their cloud journey.
Title: Azure pricing: How to optimize costs for your Azure workloads
Source: Azure Governance and Management
Author: Kyle Ikeda
Publication Date: 6/13/24
Content excerpt:
In this final installment of our blog series, we will discover how to optimize the value of your Azure investment. Through a mixture of optimization best practices and Azure tools, we will see how the digital media company Contoso maximizes their cloud spend to get more out of their workloads.
Title: Announcing Zone Redundancy and Multi-Region Capabilities in Azure Landing Zones
Source: Azure Governance and Management
Author: Paul Grimley
Publication Date: 6/18/24
Content excerpt:
In today’s dynamic business environment, the resilience of cloud infrastructure is not just a preference but a necessity. We are thrilled to announce the latest enhancements in Azure Landing Zones with the rollout of the first phase of zone redundancy and multi-region support, designed to meet the high demands for availability and resilience in your cloud deployments. We are also announcing our plans and subsequent roadmap to make our ALZ Bicep and ALZ Terraform implementation options zone-redundant by the end of the calendar year (2024).
Title: Announcing the General Availability of Change Actor
Source: Azure Governance and Management
Author: Ian Carter
Publication Date: 6/19/24
Content excerpt:
Identifying who made a change to your Azure resources and how the change was made just became easier! With Change Analysis, you can now see who initiated the change and with which client that change was made, for changes across all your tenants and subscriptions.
Title: Announcing Azure Monitoring Agent support in Azure Landing Zones
Source: Azure Governance and Management
Author: Arjen Huitema
Publication Date: 6/21/24
Content excerpt:
Hello and welcome to another blog post about Azure Landing Zones, the best practice framework for accelerating your cloud adoption journey. In this post, I will share with you some of the latest updates and enhancements that we have made to Azure Landing Zones.
Title: Controlling Data Egress in Azure
Source: Azure Networking
Author: Craig DuBose
Publication Date: 6/10/24
Content excerpt:
Regulated companies impose stringent requirements on data governance to prevent data exfiltration. As a Cloud Architect, ensuring the safe and efficient exit of data from our network to external destinations is paramount. This document aims to provide a comprehensive guide to the strategies, best practices, and tools we employ at various customers to maintain robust security measures.
Title: Azure Virtual Network Manager mesh and direct connectivity are generally available
Source: Azure Networking
Author: Andrea Michael
Publication Date: 6/13/24
Content excerpt:
Azure Virtual Network Manager’s mesh connectivity configuration and direct connectivity option in the hub and spoke connectivity configuration are generally available in all public regions! Visit our public documentation on connectivity configurations to learn more about Azure Virtual Network Manager’s connectivity configuration concepts, how they work, and steps to get started.
Title: Build scalable cross-subscription applications with Azure Load Balancer
Source: Azure Networking
Author: Mahip Deora
Publication Date: 6/20/24
Content excerpt:
We are thrilled to announce that Azure cross-subscription Load Balancer is now available for public preview in all Azure public and national cloud regions. This capability enables you to have your Azure Load Balancer components in different subscriptions. For example, you could have the load balancer’s frontend or backend instances in a different subscription from the one that the load balancer belongs to.
Title: Optimizing Data Flow: Leveraging ExpressRoute FastPath for reduced latency and increased throughput
Source: Azure Networking
Author: Cynthia Treger
Publication Date: 6/27/24
Content excerpt:
This article examines the data flow and performance benefits of Microsoft Azure’s ExpressRoute and ExpressRoute FastPath features in Hub & Spoke environments. It outlines the default asymmetric data routing and the enhancements achieved through FastPath. Key updates and constraints for FastPath, as well as IP address limits and monitoring metrics, are also discussed.
Title: A Closer Look at Azure WAF’s Data Masking Capabilities for Azure Front Door
Source: Azure Network Security
Author: David Frazee
Publication Date: 6/13/24
Content excerpt:
The Azure Web Application Firewall (WAF) on Azure Front Door offers centralized protection for your web applications against vulnerabilities and threats. The effectiveness of your Azure WAF in managing traffic can be assessed through WAF logs stored in specified locations such as a Log Analytics Workspace or Storage Accounts. These logs document requests that have been either matched or blocked by WAF rules. This data is crucial for monitoring, auditing, and resolving issues. By default, WAF logs are maintained in a plain text format for user convenience and analysis. However, these client requests might include sensitive personal data, like personally identifiable information (PII), which can include names, addresses, contact details, and financial information. Without proper sanitization, logs containing such PII could be exposed to unauthorized access. To address this, Azure Front Door WAF now offers sensitive data protection through log scrubbing. This feature is Generally Available as of June 20, 2024. WAF log scrubbing employs a customizable rules engine to pinpoint and redact sensitive portions within the requests, replacing them with a series of asterisks (******) to prevent data exposure. This blog explains the log scrubbing process and provides practical examples for a more comprehensive understanding.
Title: Seamless Recovery: How to Automate Azure VM Evictions Start Ups with Azure Functions
Source: Core Infrastructure and Security
Author: Werner Rall
Publication Date: 6/10/24
Content excerpt:
Azure has some incredible services that we can use for all business sizes and even budgets. One of these amazing services is a highly discounted virtual machine called a spot instance. A spot instance, in essence, is a special kind of virtual machine that can be evicted at any time when capacity is required for standard or default virtual machines. Why would I want to run it then? Because it is super cheap! How cheap? In some cases, up to 90% off.
Title: Dynamically Updating Azure IP Ranges with PowerShell and DevOps
Source: Core Infrastructure and Security
Author: Werner Rall
Publication Date: 6/17/24
Content excerpt:
Keeping your Azure IP ranges up-to-date is crucial for maintaining the security and efficiency of your cloud environment. This blog post will guide you through the process of dynamically updating your Azure IP ranges using the official Azure documentation, PowerShell scripts, and DevOps practices.
Title: Simplifying Azure Diagnostics with Category Groups and the New Built-In Policies
Source: Core Infrastructure and Security
Author: Heinrich Gantenbein, Luke Alderman
Publication Date: 6/24/24
Content excerpt:
Let’s talk about how you use it in your organization. We are not covering the mechanics here; the documentation covers that. Instead, we’ll cover variations to deployment.
Title: Update on MFA requirements for Azure sign-in
Source: Core Infrastructure and Security
Author: Naj Shahid
Publication Date: 6/27/24
Content excerpt:
We would like to share an update on the announcement that Microsoft will require multi-factor authentication (MFA) for users signing into Azure. In this post, we share clarifications on the scope, timing and implementation details, along with guidance for preparation.
Title: How to budget your Azure cloud spend with Microsoft Cost Management
Source: FinOps
Author: Gregor Wohlfarter
Publication Date: 6/11/24
Content excerpt:
If you are using Azure for your cloud applications, you might be wondering how to manage your costs effectively. You might have heard of Microsoft Cost Management, a service that helps you monitor, analyze, and optimize your cloud spending. But did you know that Microsoft Cost Management also offers a powerful feature called Budgets?
Title: Azure Advisor Cost Optimization workbook – April release
Source: FinOps
Author: Seif Bassem
Publication Date: 6/13/24
Content excerpt:
The Azure Cost Optimization workbook is a powerful tool that helps you monitor and optimize your Azure costs. It provides you with a comprehensive overview of your Azure environment and offers actionable insights and recommendations based on the Well-Architected Framework Cost Optimization pillar.
Title: Use budget management and forecasting to bring your FinOps practice into the era of AI
Source: FinOps
Author: Antonio Ortoll
Publication Date: 6/21/24
Content excerpt:
As you expand your use of the cloud, cost management becomes increasingly important. But lack of visibility into spending practices can hamper your cloud cost management efforts. With cloud costs constantly fluctuating and decision-making often decentralized in large organizations, gaining visibility into expenses can be challenging. The right cloud management tools can help reveal and eliminate hidden costs associated with the cloud and provide a holistic view of all your cloud cost centers.
Title: Empowering cloud efficiency through FinOps
Source: FinOps
Author: Sonia Cuff, Arthur Clares, Thomas Lewis
Publication Date: 6/25/24
Content excerpt:
Now available on-demand, for free, is the recording of our Azure Webinar session “Empowering cloud efficiency through FinOps”.
Title: Announcing new Windows Autopilot onboarding experience for government and commercial customers
Source: Intune Customer Success
Author: Maggie Dakeva
Publication Date: 6/5/24
Content excerpt:
Today, Intune is releasing a new Autopilot profile experience, Windows Autopilot device preparation, which enables IT admins to deploy configurations efficiently and consistently and removes the complexity of troubleshooting for both commercial and government (Government Community Cloud (GCC) High, and U.S. Department of Defense (DoD)) organizations and agencies.
Title: Granular RBAC permissions for endpoint security workloads
Source: Intune Customer Success
Author: Laura Arrizza
Publication Date: 6/20/24
Content excerpt:
The built-in role ‘Endpoint Security Manager’ is used to manage policies and features within the Microsoft Intune admin center Endpoint security blade, or admin actions can be limited by using the custom role with the ‘Security baselines’ permission.
With Intune’s June (2406) release, we’ll begin adding new permissions for each endpoint security workload to allow for additional granularity and control. The ‘Security baselines’ permission previously included all security policies; now it will only include security workloads that do not have their own permission.
Title: Single-region deployment without Global Reach, using Secure Virtual WAN Hub with Routing-Intent
Source: ITOps Talk
Author: Jason Medina
Publication Date: 6/7/24
Content excerpt:
This article describes the best practices for connectivity and traffic flows with single-region Azure VMware Solution when using Azure Secure Virtual WAN with Routing Intent. You learn the design details of using Secure Virtual WAN with Routing-Intent without Global Reach. This article breaks down Virtual WAN with Routing Intent topology from the perspective of an Azure VMware Solution private cloud, on-premises sites, and Azure native. The implementation and configuration of Secure Virtual WAN with Routing Intent are beyond the scope and aren’t discussed in this document.
Title: Dual-region deployments using Secure Virtual WAN Hub with Routing-Intent without Global Reach
Source: ITOps Talk
Author: Jason Medina
Publication Date: 6/25/24
Content excerpt:
This article describes the best practices for connectivity, traffic flows, and high availability of dual-region Azure VMware Solution when using Azure Secure Virtual WAN with Routing Intent. You learn the design details of using Secure Virtual WAN with Routing-Intent, without Global Reach. This article breaks down Virtual WAN with Routing Intent topology from the perspective of Azure VMware Solution private clouds, on-premises sites, and Azure native. The implementation and configuration of Secure Virtual WAN with Routing Intent are beyond the scope and aren’t discussed in this document.
Title: New and Free Active Directory Domain Services Applied Skill Credential
Source: ITOps Talk
Author: Orin Thomas
Publication Date: 6/25/24
Content excerpt:
Today Microsoft launched a brand new Applied Skill Credential related to Active Directory Domain Services Administration.
Title: Microsoft Copilot in Azure Series – Copilot Access Management
Source: ITOps Talk
Author: Pierre Roman
Publication Date: 6/27/24
Content excerpt:
Today, we’re diving into Microsoft Copilot in Azure. It’s like having a super-smart assistant in the cloud! It’s an AI-powered tool that’s all about making your life easier when you’re working with Azure, when you’re navigating the Azure portal, or using the Azure mobile app. Now, keep in mind, at the time of recording this, Copilot in Azure is still in preview. That means it’s like a sneak peek, and there are some extra terms you have to check out before you jump in. This Copilot in Azure can be a real lifesaver. It knows a ton about Azure’s services and resources, it also has access to all the information in Azure Resource Graph. It’s like having a cheat sheet for the cloud. You can ask it questions about your environment, and it’ll give you answers tailored to your own Azure resources, and your level of access.
Title: Microsoft Entra ID Governance licensing clarifications
Source: Microsoft Entra (Azure AD)
Author: Kaitlin Murphy
Publication Date: 6/19/24
Content excerpt:
In the past few weeks, we’ve announced the general availability of Microsoft Entra External ID and Microsoft Entra ID multi-tenant collaboration. We’ve received requests for more detail from some of you regarding licensing, so I’d like to provide additional clarity for both of these scenarios.
Title: How to break the token theft cyber-attack chain
Source: Microsoft Entra (Azure AD)
Author: Alex Weinert
Publication Date: 6/20/24
Content excerpt:
We’ve written a lot about how attackers try to break passwords. The solution to password attacks—still the most common attack vector for compromising identities—is to turn on multifactor authentication (MFA). But as more customers do the right thing with MFA, actors are going beyond password-only attacks. So, we’re going to publish a series of articles on how to defeat more advanced attacks, starting with token theft. In this article, we’ll start with some basics on how tokens work, describe a token theft attack, and then explain what you can do to prevent and mitigate token theft now.
Title: Move to cloud authentication with the AD FS migration tool!
Source: Microsoft Entra (Azure AD)
Author: Melanie Maynes
Publication Date: 6/26/24
Content excerpt:
We’re excited to announce that the migration tool for Active Directory Federation Service (AD FS) customers to move their apps to Microsoft Entra ID is now generally available! Customers can begin updating their identity management with more extensive monitoring and security infrastructure by quickly identifying which applications are capable of being migrated and assessing all their AD FS applications for compatibility.
Title: Introducing the Microsoft Entra PowerShell module
Source: Microsoft Entra (Azure AD)
Author: Steve Mutungi
Publication Date: 6/27/24
Content excerpt:
We’re thrilled to announce the public preview of the Microsoft Entra PowerShell module, a new high-quality and scenario-focused PowerShell module designed to streamline management and automation for the Microsoft Entra product family. In 2021, we announced that all our future PowerShell investments would be in the Microsoft Graph PowerShell SDK. Today, we’re launching the next major step on this journey. The Microsoft Entra PowerShell module (Microsoft.Graph.Entra) is a part of our ongoing commitment and increased investment in Microsoft Graph PowerShell SDK to improve your experience and empower automation with Microsoft Entra.
Title: The guide to Microsoft Intune resources
Source: Microsoft Intune
Author: Lior Bela
Publication Date: 6/10/24
Content excerpt:
Whether mobile or desktop, virtual or physical, in the office or out in the world, Microsoft Intune can help you secure access to your company resources and keep your workforce productive from a single pane of glass. While this is an awesome capability, it also brings some complexity with it. Customers have asked for a guide that spells out explicitly what they should do to get started with Intune. Here you’ll find the resources you need before, during, and after your Intune deployment.
Title: Windows Server 2025 Storage Performance with Diskspd
Source: Storage at Microsoft
Author: Dan Cuomo
Publication Date: 6/14/24
Content excerpt:
If you manage on-premises servers, you know one of the final tests you run before going to production is a performance test. You want to ensure that when you migrate virtual machines to that host, or you install SQL server on that machine, that you’re going to get the expected IOPS, the expected latency, or whatever other metrics you deem important for your business’ workloads.
So, after all the group policies have been applied, firewall rules are set, and agents are installed and configured (or anything else in your deployment playbook), you download Diskspd, NTTTCP, and the other performance testing tools you use to compare this server to your baseline (if you don’t do this, you should!).
Title: Myths and misconceptions: Windows 11 and cloud native
Source: Windows IT Pro
Author: Harjit Dhaliwal
Publication Date: 6/11/24
Content excerpt:
Let’s discuss the myths around the move to cloud-native management, with Microsoft Intune and Microsoft Entra ID, and Windows 11. In this post, we will address some common questions and misconceptions by sharing insights and perspectives gathered from the conversations we’ve had with organizations of all sizes from around the globe this past year.
Title: Deprecation of WSUS driver synchronization
Source: Windows IT Pro
Author: Paul Reed
Publication Date: 6/28/24
Content excerpt:
If you’ve been using driver synchronization updates via Windows Server Update Services (WSUS), you may already be aware of the newest cloud-based driver services. Many are already enjoying the benefits of managing their driver updates with Microsoft cloud. This means that we’ll soon be deprecating WSUS driver synchronization.
Title: Windows news you can use: June 2024
Source: Windows IT Pro
Author: Thomas Trombley
Publication Date: 6/28/24
Content excerpt:
Here are the latest Windows 11 features, capabilities, services, and tools that you can start using this month. Based on your feedback, we’ve also included information to help you catch up on lifecycle milestones and preview opportunities. We hope that this paints a better timeline and helps you embrace a secure, cloud-native future with Windows 11.
Title: Introducing GPU Innovations with Windows Server 2025
Source: Windows OS Platform
Author: Afia Boakye, Rebecca Wambua
Publication Date: 6/6/24
Content excerpt:
AI empowers businesses to innovate, streamline operations, and deliver exceptional value. With the upcoming Windows Server 2025 Datacenter and Azure Stack HCI 24H2 releases, Microsoft is empowering customers to lead their businesses through the AI revolution.
Title: Hyper-V live migration network selection in Windows Server 2025
Source: Windows OS Platform
Author: Steven Ekren
Publication Date: 6/13/24
Content excerpt:
Microsoft continues to bring innovation and improvements to our Hyper-V platform. Live migration has been around for a while and is a key component to managing virtual machines (VMs). With Windows Server 2025 you will see improvements that make Hyper-V more reliable, increase scale, and improve performance. This article covers an improvement with Live Migration, and you can expect to see more articles soon to cover other innovations for Windows Server 2025.
Title: Windows Server 2025 and beyond
Source: Windows OS Platform
Author: Dan Cuomo
Publication Date: 6/14/24
Content excerpt:
This article focuses on what’s new and what’s coming in Windows Server 2025.
Title: Use GPUs with Clustered VMs through Direct Device Assignment
Source: Windows OS Platform
Author: Afia Boakye
Publication Date: 6/19/24
Content excerpt:
In the rapidly evolving landscape of artificial intelligence (AI), the demand for more powerful and efficient computing resources is ever-increasing. Microsoft is at the forefront of this technological revolution, empowering customers to harness the full potential of their AI workloads with their GPUs. GPU virtualization makes the ability to process massive amounts of data quickly and efficiently possible. Using GPUs with clustered VMs through DDA (Discrete Device Assignment) becomes particularly significant in failover clusters, offering direct GPU access.
Title: Microsoft options for VMware migration
Source: Windows OS Platform
Author: Dan Cuomo
Publication Date: 6/21/24
Content excerpt:
Recent developments in the on-premises virtualization market have unsettled users and prompted a re-evaluation of their organization’s strategy. Microsoft provides a robust set of solutions tailored to your specific goals and requirements. During this session, we will delve into these options, emphasizing the long-term advantages of choosing Microsoft & Hyper-V.
Title: Improving server security and productivity with Hotpatching
Source: Windows OS Platform
Author: Dan Cuomo
Publication Date: 6/28/24
Content excerpt:
When it comes to installing security updates, organizations are often concerned about the potential for business disruption and reduced system availability. This is a thing of the past with Hotpatching!
Come see how Hotpatching enables you to apply critical security updates without rebooting your servers, reducing downtime and improving productivity. Hear from the Xbox team, who have successfully adopted Hotpatching for the online gaming platform. Discover what is in store as we expand the service and make it more broadly available.
Previous CTO! Guides:
CIS Tech Community-Check This Out! (CTO!) Guides
Additional resources:
Azure documentation
Azure pricing calculator (VERY handy!)
Microsoft Azure Well-Architected Framework
Microsoft Cloud Adoption Framework
Windows Server documentation
Windows client documentation for IT Pros
PowerShell documentation
Core Infrastructure and Security blog
Microsoft Tech Community blogs
Microsoft technical documentation (Microsoft Docs)
Sysinternals blog
Microsoft Learn
Microsoft Support (Knowledge Base)
Microsoft Archived Content (MSDN/TechNet blogs, MSDN Magazine, MSDN Newsletter, TechNet Newsletter)
Microsoft Tech Community – Latest Blogs –Read More
GenAI Mastery: Crafting Robust Enterprise Solutions with PromptFlow and LangChain
In the rapidly evolving landscape of artificial intelligence, generative AI (GenAI) has emerged as a game-changer for enterprises. However, building end-to-end GenAI applications that are robust, observable, and scalable can be challenging. This blog post will guide you through the process of creating enterprise-grade GenAI solutions using PromptFlow and LangChain, with a focus on observability, trackability, model monitoring, debugging, and autoscaling. The purpose of this post is to show that even if you use LangChain, the OpenAI SDK, or LlamaIndex, you can still use PromptFlow and AI Studio for enterprise-grade GenAI applications.
Understanding Enterprise GenAI Applications
Enterprise GenAI applications are AI-powered solutions that can generate human-like text, images, or other content based on input prompts. These applications need to be:
Reliable
Secure
Scalable
Key considerations include:
Data privacy
Performance at scale
Integration with existing enterprise systems
PromptFlow and LangChain: A Powerful Combination
PromptFlow
A toolkit for building AI applications with large language models (LLMs)
Offers features like prompt management and flow orchestration
LangChain
A framework for developing applications powered by language models
Provides tools for prompt optimization and chaining multiple AI operations
Together, these frameworks offer a robust foundation for enterprise GenAI applications:
PromptFlow excels in managing complex prompt workflows
LangChain provides powerful tools for interacting with LLMs and structuring applications
Building the Application: A Step-by-Step Approach
Define your application requirements and use cases: Defining your application requirements and use cases is a pivotal step in developing a successful Retrieval-Augmented Generation (RAG) system for document processing. Begin by identifying the core objectives of your application, such as the types of documents it will handle, the specific data it needs to extract, and the desired output format. Clearly outline the use cases, such as automated report generation, data extraction for business intelligence, or enhancing customer support through better information retrieval. Detail the functional requirements, including the ability to parse various document formats, the accuracy and speed of the retrieval process, and the integration capabilities with existing systems. Additionally, consider non-functional requirements like scalability, security, and user accessibility. By thoroughly defining these aspects, you create a roadmap that guides the development process, ensuring the final application meets user expectations and delivers tangible value.
Set up your development environment with PromptFlow and LangChain: Setting up your development environment with PromptFlow and LangChain is essential for building an efficient Retrieval-Augmented Generation (RAG) application. Start by ensuring you have a robust development setup, including a compatible operating system, necessary software dependencies, and a version control system like Git. Install PromptFlow, a powerful tool for designing, testing, and deploying prompt-based applications. This tool will streamline your workflow, allowing you to create, test, and optimize prompts with ease. Next, integrate LangChain, a versatile framework designed to facilitate the use of language models in your applications. LangChain provides modules for chaining together various components, such as prompts, retrieval mechanisms, and post-processing steps, enabling you to build complex RAG systems efficiently. Configure your environment to support these tools, ensuring you have the necessary libraries and frameworks installed, and set up a virtual environment to manage dependencies. By meticulously setting up your development environment with PromptFlow and LangChain, you lay a solid foundation for creating a robust, scalable, and efficient RAG application.
Start with a Prompt Flow project.
pf flow init --flow rag-langchain-pf --type chat
Running this command creates a folder containing the files shown below.
Design your prompt flow using PromptFlow’s visual interface: Designing your prompt flow using PromptFlow’s visual interface is a crucial step in developing an intuitive and effective Retrieval-Augmented Generation (RAG) application. Begin by familiarizing yourself with PromptFlow’s drag-and-drop interface, which allows you to visually map out the sequence of prompts and actions your application will execute. Start by defining the initial input prompts that will trigger the retrieval of relevant documents. Use the visual interface to connect these prompts to subsequent actions, such as querying your document database or calling external APIs for additional data.
Next, incorporate conditional logic to handle various user inputs and scenarios, ensuring that your prompt flow can adapt dynamically to different contexts. Leverage PromptFlow’s built-in modules to integrate language model responses, enabling seamless transitions between retrieving information and generating human-like text. As you design the flow, make use of visual debugging tools to test each step, ensuring that the prompts and actions work together harmoniously. This iterative process allows you to refine and optimize the prompt flow, making it more efficient and responsive to user needs. By taking advantage of PromptFlow’s visual interface, you can create a clear, logical, and efficient prompt flow that enhances the overall performance and user experience of your RAG application.
First, install the Visual Studio Code extension for Prompt Flow.
Once the Prompt Flow extension is installed, you will see the flow you just created with pf flow init. Opening the flow shows the view below.
Next, create a custom connection that will store the Azure OpenAI and Azure AI Search (ACS) keys and endpoints. Create a file called langchain_pf_connection.yaml and paste in the details below.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: langchain_pf_connection
type: custom
configs:
  test_key: test_value
secrets:  # required
  AZURE_OPENAI_ENDPOINT: https://XXXX.openai.azure.com/
  AZURE_OPENAI_GPT_DEPLOYMENT: gpt-4o
  AZURE_OPENAI_API_KEY: XXXX
  ACS_ENDPOINT: https://search-XXXX.search.windows.net
  ACS_KEY: XXXX
Now run the command below in the terminal to create the custom connection.
pf connection create -f langchain_pf_connection.yaml
Once the connection exists, the next step is to create the flow. Edit flow.dag.yaml and paste in the code below.
id: template_chat_flow
name: Template Chat Flow
inputs:
  chat_history:
    type: list
    is_chat_input: false
    is_chat_history: true
  question:
    type: string
    is_chat_input: true
outputs:
  answer:
    type: string
    reference: ${code.output}
    is_chat_output: true
nodes:
- name: code
  type: python
  source:
    type: code
    path: code.py
  inputs:
    chat_history: ${inputs.chat_history}
    input1: ${inputs.question}
    my_conn: langchain_pf_connection
  use_variants: false
node_variants: {}
environment:
  python_requirements_txt: requirements.txt
Implement LangChain components for enhanced LLM interactions:
Implementing LangChain components for enhanced LLM interactions is a key aspect of building a sophisticated Retrieval-Augmented Generation (RAG) application. LangChain offers a modular approach to integrating language models, enabling you to construct complex workflows that leverage the power of large language models (LLMs). Start by identifying the core components you need, such as input processing, retrieval mechanisms, and output generation.
Begin with the input processing component to handle and preprocess user queries. This might involve tokenization, normalization, and contextual understanding to ensure the query is suitable for retrieval. Next, implement the retrieval component, which connects to your document database or API endpoints to fetch relevant information. LangChain provides tools to streamline this process, such as vector stores for efficient similarity searches and retrievers that can interface with various data sources.
Once the relevant documents are retrieved, integrate the LLM component to generate responses. Use LangChain’s chaining capabilities to combine the retrieved information with prompts that guide the LLM in generating coherent and contextually appropriate outputs. You can also implement post-processing steps to refine the output, ensuring it meets the desired accuracy and relevance criteria.
Additionally, consider incorporating LangChain’s memory components to maintain context across interactions, enhancing the continuity and relevance of the responses. By carefully implementing these components, you can create a robust system that leverages the strengths of LLMs to deliver accurate, context-aware, and high-quality interactions within your RAG application.
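The chaining idea described above can be sketched framework-free as plain function composition: each step is a function, and the chain is their composition. Everything below (retrieve, build_prompt, fake_llm) is an illustrative stand-in, not a LangChain API:

```python
# Toy RAG "chain" as composed functions: retrieve -> build prompt -> model call.

def retrieve(query: str) -> list:
    # Stand-in for a vector-store retriever over an indexed corpus.
    corpus = {
        "hotpatch": "Hotpatching applies updates without reboots.",
        "gpu": "DDA gives clustered VMs direct GPU access.",
    }
    return [text for key, text in corpus.items() if key in query.lower()]

def build_prompt(query: str, docs: list) -> str:
    # "Stuff" the retrieved documents into the prompt as context.
    context = "\n\n".join(docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

def fake_llm(prompt: str) -> str:
    # Stand-in for the real model call.
    return prompt.splitlines()[-1].replace("Question: ", "Answer about: ")

def rag_chain(query: str) -> str:
    return fake_llm(build_prompt(query, retrieve(query)))

print(rag_chain("How does hotpatch work?"))
# Answer about: How does hotpatch work?
```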
Create a file called code.py with the following contents.
# from dotenv import load_dotenv
# load_dotenv('azure.env')
import os

from langchain_core.messages import AIMessage, HumanMessage
from promptflow.connections import CustomConnection
from promptflow.core import tool


@tool
def my_python_tool(input1: str, chat_history: list, my_conn: CustomConnection) -> str:
    # Copy the connection secrets into environment variables so chain.py can read them.
    connection_dict = dict(my_conn.secrets)
    for key, value in connection_dict.items():
        os.environ[key] = value

    # Import here so chain.py sees the environment variables set above.
    from chain import rag_chain

    # Convert PromptFlow chat history into LangChain message objects.
    chat_history_revised = []
    for item in chat_history:
        chat_history_revised.append(HumanMessage(item['inputs']['question']))
        chat_history_revised.append(AIMessage(item['outputs']['answer']))

    return rag_chain.invoke(
        {"input": input1, "chat_history": chat_history_revised}
    )['answer']
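As an aside, the chat-history reshaping done in the tool above can be illustrated without LangChain classes. PromptFlow hands the tool a list of dicts with "inputs" and "outputs" keys, which the tool flattens into alternating human/AI messages. This hypothetical helper shows the same transformation on plain tuples:

```python
# Illustrative only -- not part of the flow. Shows the shape of PromptFlow
# chat history and how it flattens into an alternating message list.

def flatten_history(chat_history: list) -> list:
    messages = []
    for item in chat_history:
        messages.append(("human", item["inputs"]["question"]))
        messages.append(("ai", item["outputs"]["answer"]))
    return messages

history = [
    {"inputs": {"question": "What is DDA?"},
     "outputs": {"answer": "Discrete Device Assignment."}},
]
print(flatten_history(history))
# [('human', 'What is DDA?'), ('ai', 'Discrete Device Assignment.')]
```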
Develop the application logic and user interface
First, create a requirements.txt file. You can create a separate virtual environment and run pip install -r requirements.txt.
langchain==0.2.6
langchain_openai
python-dotenv
The next step is to create the LangChain LLM chain using the LangChain Expression Language. For this, create a file called chain.py.
import os

from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    openai_api_version="2024-03-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment="text-embedding-ada-002",
)

llm = AzureChatOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment="gpt-4o",
    streaming=False,
)

index_name: str = "llm-powered-auto-agent"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=os.environ["ACS_ENDPOINT"],
    azure_search_key=os.environ["ACS_KEY"],
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

# Retrieve relevant snippets from the Azure AI Search index.
retriever = vector_store.as_retriever()


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


contextualize_q_system_prompt = """Given a chat history and the latest user question
which might reference context in the chat history, formulate a standalone question
which can be understood without the chat history. Do NOT answer the question,
just reformulate it if needed and otherwise return it as is."""

contextualize_q_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)

history_aware_retriever = create_history_aware_retriever(
    llm, retriever, contextualize_q_prompt
)

qa_system_prompt = """You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.
{context}"""

qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", qa_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)

question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
Test the flow: once you are done with the above steps, the next step is to test the flow. You can do this in two ways. One is from the VS Code Prompt Flow extension: open the flow as shown below and click the Test button.
Alternatively, you can use the command line.
pf flow test --flow ..rag-langchain-pf --interactive
Output:
Implementing Observability and Trackability
Observability and trackability are crucial for maintaining and improving GenAI applications:
1. Implement logging throughout your application, capturing:
Inputs
Outputs
Intermediate steps
Azure Machine Learning provides tracing for logging and managing your LLM application tests and evaluations, and for debugging and observing by drilling down into the trace view.
Tracing is implemented in the prompt flow open-source package, following the OpenTelemetry specification, so you can trace LLM calls and functions, and LLM frameworks such as LangChain and AutoGen, regardless of which framework you use. When you run PromptFlow locally, it automatically starts the pf service and traces appear under
http://127.0.0.1:23333/v1.0/ui/traces/?#collection=rag-langchain-pf
2. Use distributed tracing to track requests across different components of your system:
3. Set up metrics collection for key performance indicators (KPIs)
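To make the span idea concrete, here is a toy, framework-free sketch of how distributed tracing records nested units of work. Real deployments use OpenTelemetry exporters; this only illustrates the name/parent/duration data shape:

```python
# Minimal tracing sketch: nested spans record their name, parent, and duration.
import time
from contextlib import contextmanager

SPANS = []   # completed spans, appended as they finish
_STACK = []  # currently open span names, innermost last

@contextmanager
def span(name: str):
    parent = _STACK[-1] if _STACK else None
    _STACK.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _STACK.pop()
        SPANS.append({
            "name": name,
            "parent": parent,
            "duration_s": time.perf_counter() - start,
        })

# A chat request wrapping a retrieval step and an LLM call.
with span("chat_request"):
    with span("retrieve"):
        time.sleep(0.01)
    with span("llm_call"):
        time.sleep(0.01)

for s in SPANS:
    print(s["name"], "<-", s["parent"])
```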
Deployment as Online Managed Endpoint:
A flow can be deployed to multiple platforms, such as a local development service, Docker container, Kubernetes cluster, etc.
Deploy to Azure App Service: if you want to deploy to Azure App Service, the steps to perform are explained in the official blog.
If you want to deploy to Azure Machine Learning as an online managed endpoint, follow the steps below. You need to create the following files.
First, register the flow as a model in the AML model registry. For that, create model.yaml.
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: langchain-pf-model
path: .
description: register langchain pf folder as a custom model
properties:
  is-promptflow: true
  azureml.promptflow.mode: chat
  azureml.promptflow.chat_history: chat_history
  azureml.promptflow.chat_output: answer
  azureml.promptflow.chat_input: question
  azureml.promptflow.dag_file: flow.dag.yaml
  azureml.promptflow.source_flow_id: langchain-pf
Next, register the model using the above file. Before that, make sure you are logged in to Azure and have set the default workspace.
az account set --subscription <subscription ID>
az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
Use az ml model create --file model.yaml to register the model to your workspace. Next, define the managed online endpoint in a YAML file (for example, endpoint.yaml):
name: langchain-pf-endpoint
description: basic chat endpoint deployed using CLI
auth_mode: key
properties:
  # this property only works for system-assigned identity.
  # if the deploying user has access to connection secrets,
  # the endpoint system-assigned identity will be auto-assigned the connection secrets reader role as well
  enforce_access_to_default_secret_stores: enabled
Create the endpoint with az ml online-endpoint create --file <your endpoint YAML>. Then define the deployment in blue-deployment.yml:
name: blue
endpoint_name: langchain-pf-endpoint
model: azureml:langchain-pf-model:1
# You can also specify model files path inline
# path: examples/flows/chat/basic-chat
environment:
  build:
    path: image_build_with_reqirements
    dockerfile_path: Dockerfile
  inference_config:
    liveness_route:
      path: /health
      port: 8080
    readiness_route:
      path: /health
      port: 8080
    scoring_route:
      path: /score
      port: 8080
instance_type: Standard_E16s_v3
instance_count: 1
request_settings:
  request_timeout_ms: 300000
environment_variables:
  PROMPTFLOW_CONNECTION_PROVIDER: azureml://subscriptions/<subscription_id>/resourceGroups/<resource-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>
  APPLICATIONINSIGHTS_CONNECTION_STRING: <connection_string>
Finally, run az ml online-deployment create --file blue-deployment.yml --all-traffic to create the deployment.
Model Monitoring and Debugging Strategies
Effective monitoring and debugging are essential for maintaining the quality of your GenAI application:
Implement model performance monitoring to track:
Accuracy
Latency
Other relevant metrics
You will be able to track these metrics with the AML endpoint.
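As a complement to the endpoint's built-in metrics, here is a hedged sketch of client-side latency tracking: record per-request latencies and compute percentiles so alert thresholds have a baseline. The LatencyTracker class is illustrative, not an Azure ML API:

```python
# Illustrative latency tracker: collect samples, report percentiles.
import statistics

class LatencyTracker:
    def __init__(self):
        self.samples_ms = []

    def record(self, latency_ms: float) -> None:
        self.samples_ms.append(latency_ms)

    def percentile(self, p: int) -> float:
        # quantiles(n=100) returns cut points for the 1st..99th percentile
        return statistics.quantiles(self.samples_ms, n=100)[p - 1]

tracker = LatencyTracker()
for ms in [120, 130, 95, 400, 110, 105, 98, 102, 115, 125]:
    tracker.record(ms)

print(tracker.percentile(50))  # median latency in ms -> 112.5
```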
Set up alerts for anomalies or performance degradation
Use PromptFlow’s built-in debugging tools to inspect and troubleshoot prompt executions. You can examine individual prompts to check quality and debug.
Implement A/B testing capabilities to compare:
Different prompt strategies
Model versions
You can run two deployments (blue and green) and perform A/B testing with the same approach.
Ensuring Scalability in Enterprise Environments
To meet the demands of enterprise users, your GenAI application must be scalable:
Design your application with a microservices architecture for better scalability
Implement autoscaling using container orchestration platforms like Kubernetes
Optimize database and caching strategies for high-volume data processing
Consider using serverless technologies for cost-effective scaling of certain components
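The caching strategy from the list above can be sketched with a tiny prompt-response cache keyed on a hash of the normalized prompt; here call_llm is a placeholder for a real (paid) model call, not an actual SDK function:

```python
# Sketch: cache identical (normalized) prompts to avoid repeated LLM calls.
import hashlib

CACHE = {}
CALLS = {"count": 0}

def call_llm(prompt: str) -> str:
    CALLS["count"] += 1  # stands in for a paid API round trip
    return f"response-to:{prompt}"

def cached_completion(prompt: str) -> str:
    # Normalize so trivially different prompts share one cache entry.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in CACHE:
        CACHE[key] = call_llm(prompt)
    return CACHE[key]

cached_completion("What is Hotpatching?")
cached_completion("what is hotpatching?   ")  # normalizes to the same key
print(CALLS["count"])  # -> 1, the second call was served from cache
```

In production this would typically live in Redis or another shared cache with a TTL, so scaled-out replicas share hits.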
Conclusion
Building end-to-end enterprise GenAI applications with PromptFlow and LangChain offers a powerful approach to creating robust, observable, and scalable AI solutions. By focusing on observability, trackability, model monitoring, debugging, and autoscaling, you can create applications that meet the demanding requirements of enterprise environments.
As you embark on your GenAI development journey, remember that the field is rapidly evolving. Stay updated with the latest developments in PromptFlow, LangChain, and the broader AI landscape to ensure your applications remain at the cutting edge of technology.
References:
1. https://github.com/microsoft/promptflow/tree/main/examples
2. https://microsoft.github.io/promptflow/cloud/azureai/deploy-to-azure-appservice.html
MATLAB Web App: Multiple Page survey
I’m trying to make a GUI that takes in answers from multiple study participants. In the GUI, I want the participants to rate multiple audio files based on different questions. I’ve built a basic GUI in App Designer for rating one audio file (with placeholder question titles) and I want it to go to another page with the same structure for rating the subsequent audio files.
I’m a newcomer to MATLAB and needed some pointers on how I could do this. Would I need to create multiple .mlapp for all the audio files and somehow connect them as web apps? I don’t have any experience with working with web servers and databases, so I’m not sure how I would go about in deploying this app for collecting data. Ideally, I would like participants to answer the questions for all audio files (while keeping them anonymous) and save all their responses to a server in form of an excel sheet.
I’ve attached the basic look of the GUI (only for 1 audio) as a screenshot below.
database, appdesigner MATLAB Answers — New Questions