Month: October 2024
Gradient Descent optimization in an electrical circuit or transmission line
Is there any sample work of gradient-descent-based optimization of electrical circuit or transmission line parameters done using MATLAB? I am trying to optimize the parameters of a nonlinear transmission line using a gradient descent algorithm. I intend to use MATLAB and LTspice. Any help regarding this problem would be greatly appreciated. Thanks.
gradient descent, optimization, circuits, transmission line, matlab, ltspice, ads MATLAB Answers — New Questions
MS Forms not saving data after time is up
I created an MS Form with a 45-minute timer a while ago and it was working fine. Since last week, I have noticed that forms closed when time ran out were not saving at all. Any suggestions/advice/update?
SQL MI firewall
During SQL MI deployment, a built-in firewall is deployed along with a dedicated virtual network (VNET). We can choose to access SQL MI using either a public URL or a private endpoint. Does this built-in firewall provide sufficient protection for SQL MI, or do we need to implement an additional firewall?
Set Default Search to By Columns in Excel
I have several Excel spreadsheets with over 100,000 lines. I would like to set the default Search parameter for each to search By Column instead of By Row. Preferably for all Excel documents, but I can live with this default for my most highly used large documents. There appears to be a way to do this in older versions of Excel, but the directions provided do not work with my current version of Excel. I am using Microsoft 365 apps for enterprise. Any guidance would be appreciated.
Bookings sending 2 different meeting links
We are experiencing an issue in which Bookings sends a different link to the customer and to our team member for the same meeting, leaving both parties waiting in separate meetings. This is happening more often now and is getting frustrating. Someone please help! Customers will typically book through our Bookings page. They will receive an email with the booking confirmation containing a link to a Teams meeting, and they will then receive it again in a reminder email. The team member also receives an email when the appointment is booked. Both sides of the booking emails look like the image below. Along with the email, Bookings automatically populates an event in our Outlook calendar which we click to join the Teams meeting. I have checked, and both the email that I receive and the link in my calendar are identical, so I believe the customer is getting a different link than what is sent to me. This happens every so often, so we have been calling the number linked to the booking if no one shows up, and oftentimes customers are waiting in their own Teams meeting. Is this happening to anyone else? If so, how do you fix it?
Validating that data was entered into a text Box
I have a Signin form (frmSignIn) with two Text Boxes asking for a UserName (txtName) and Password (txtPassword). The form also has two command buttons Enter (cmdEnter) and Add User (cmdAddUser).
I would like to validate that data has been entered into the text boxes before processing the form.
I have tried the following VBA Code in the BeforeUpdate event for the text Box.
Option Compare Database
Option Explicit
Private Sub txtName_BeforeUpdate(Cancel As Integer)
If txtName = "" Or IsNull(txtValue) Then
MsgBox "You are a Dummy"
Cancel = True
End If
End Sub
This only works if I enter a value in the txtName Box and then backspace to erase the value.
What am I doing wrong?
Incredibly Frustrated
Today I was due for an interview but because of issues with Teams, I was unable to connect with the host.
I needed to join the meeting as a guest, but Microsoft Teams kept logging me in no matter what. I was not given the option to join as a guest.
Why did this happen? What can be done in the future?
Need help with troubleshooting connection to receive connector
Hi Folks,
It’s been over 20 years since I’ve really had to dive into the depths of Exchange, and sad to say, I have forgotten most of what I knew. Please bear with me as I fumble through this…
Scenario:
I am currently trying to get hosted accounting software (QB) to connect to on-prem Exchange 2016 via webmail. QB asks for the password when using the configs below; change anything and it states it can't connect.
Configs:
QB is configured to use webmail to the server, mail.<domain name>, port 25. I have created a receive connector to listen for mail from the IP of the hosted QB, over port 25.
Authentication on the connector is TLS and Basic Auth. The permission group is Exchange users. Protocol logging is verbose. OWA works just fine.
I can't tell if QB truly connects or not. How can I troubleshoot this scenario and verify that QB actually connects to my receive connector? I have talked with QB T2 tech support and they could not resolve it. They sent me a link to their developer site and said good luck… I'm not a developer, so I'm not sure what to do with that.
Any help would be greatly appreciated!
VoiceRAG: An App Pattern for RAG + Voice Using Azure AI Search and the GPT-4o Realtime API for Audio
The new Azure OpenAI gpt-4o-realtime-preview model opens the door for even more natural application user interfaces with its speech-to-speech capability.
This new voice-based interface also brings an interesting new challenge with it: how do you implement retrieval-augmented generation (RAG), the prevailing pattern for combining language models with your own data, in a system that uses audio for input and output?
In this blog post we present a simple architecture for voice-based generative AI applications that enables RAG on top of the real-time audio API with full-duplex audio streaming from client devices, while securely handling access to both model and retrieval system.
Architecting for real-time voice + RAG
Supporting RAG workflows
We use two key building blocks to make voice work with RAG:
Function calling: the gpt-4o-realtime-preview model supports function calling, allowing us to include “tools” for searching and grounding in the session configuration. The model listens to audio input and directly invokes these tools with parameters that describe what it’s looking to retrieve from the knowledge base.
Real-time middle tier: we need to separate what needs to happen in the client from what cannot be done client-side. The full-duplex, real-time audio content needs to go to/from the client device’s speakers and microphone. On the other hand, the model configuration (system message, max tokens, temperature, etc.) and access to the knowledge base for RAG need to be handled on the server, since we don’t want the client to have credentials for these resources, and we don’t want to require the client to have network line-of-sight to these components. To accomplish this, we introduce a middle-tier component that proxies audio traffic while keeping aspects such as model configuration and function calling entirely on the backend.
These two building blocks work in coordination: the real-time API knows not to move a conversation forward if there are outstanding function calls. When the model needs information from the knowledge base to respond to input, it emits a “search” function call. We turn that function call into an Azure AI Search hybrid query (vector + keyword + semantic reranking), get the content passages that best relate to what the model needs to know, and send them back to the model as the function’s output. Once the model sees that output, it responds via the audio channel, moving the conversation forward.
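The round trip can be sketched in Python. Everything here is illustrative: the retrieval step is a toy in-memory lookup standing in for an Azure AI Search hybrid query, and the exact message shapes are assumptions against the preview real-time API, not taken from the sample repo:

```python
# Minimal sketch of the search round trip: the model emits a "search"
# function call, the middle tier runs a retrieval query, and the result
# is returned to the model as the function's output. The retrieval step
# is mocked; in the real app it would be an Azure AI Search hybrid query.

import json

KNOWLEDGE_BASE = {
    "doc1.md": "Employees get 20 vacation days per year.",
    "doc2.md": "The health plan covers dental and vision.",
}

def run_search(query: str) -> list[dict]:
    """Stand-in for an Azure AI Search hybrid (vector + keyword) query."""
    terms = query.lower().split()
    return [
        {"source": name, "text": text}
        for name, text in KNOWLEDGE_BASE.items()
        if any(t in text.lower() for t in terms)
    ]

def handle_function_call(call: dict) -> dict:
    """Turn a real-time-API function call into a function output item."""
    args = json.loads(call["arguments"])
    if call["name"] == "search":
        passages = run_search(args["query"])
        output = "\n".join(f'[{p["source"]}] {p["text"]}' for p in passages)
    else:
        output = ""
    return {
        "type": "conversation.item.create",
        "item": {
            "type": "function_call_output",
            "call_id": call["call_id"],
            "output": output,
        },
    }

call = {"name": "search", "call_id": "c1",
        "arguments": json.dumps({"query": "vacation days"})}
reply = handle_function_call(call)
print(reply["item"]["output"])  # [doc1.md] Employees get 20 vacation days per year.
```

Once this output item is sent back over the session, the API resumes the conversation and the model answers over the audio channel.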
A critical element in this picture is fast and accurate retrieval. The search call happens between the user turn and the model response in the audio channel, a latency-sensitive point in time. Azure AI Search is the perfect fit for this, with its low latency for vector and hybrid queries and its support for semantic reranking to maximize relevance of responses.
Generating Grounded Responses
Using function calling addresses the question of how to coordinate search queries against the knowledge base, but this inversion of control creates a new problem: we don’t know which of the passages retrieved from the knowledge base were used to ground each response. In typical RAG applications that interact with the model API as text, we can ask in the prompt for citations in a special notation and render them appropriately in the UX; but when the model is generating audio, we don’t want it to say file names or URLs out loud. Since it’s critical for generative AI applications to be transparent about what grounding data was used to respond to any given input, we need a different mechanism for identifying and showing citations in the user experience.
We also use function calling to accomplish this. We introduce a second tool called “report_grounding”, and as part of the system prompt we include instructions along these lines:
Use the following step-by-step instructions to respond with short and concise answers using a knowledge base:
Step 1 – Always use the ‘search’ tool to check the knowledge base before answering a question.
Step 2 – Always use the ‘report_grounding’ tool to report the source of information from the knowledge base.
Step 3 – Produce an answer that’s as short as possible. If the answer isn’t in the knowledge base, say you don’t know.
We experimented with different ways to formulate this prompt and found that explicitly listing this as a step-by-step process is particularly effective.
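Concretely, the two tools and the step-by-step prompt come together in the backend’s session configuration. A sketch follows; the `session.update` shape and field names are an assumption against the preview real-time API, not copied from the sample repo:

```python
# Backend-side session configuration: the step-by-step grounding prompt plus
# the two tools ("search" and "report_grounding"). The event/field names are
# assumed from the gpt-4o-realtime-preview protocol and may differ in detail.

SYSTEM_MESSAGE = (
    "Use the following step-by-step instructions to respond with short and "
    "concise answers using a knowledge base:\n"
    "Step 1 - Always use the 'search' tool to check the knowledge base "
    "before answering a question.\n"
    "Step 2 - Always use the 'report_grounding' tool to report the source "
    "of information from the knowledge base.\n"
    "Step 3 - Produce an answer that's as short as possible. If the answer "
    "isn't in the knowledge base, say you don't know."
)

session_update = {
    "type": "session.update",
    "session": {
        "instructions": SYSTEM_MESSAGE,
        "tools": [
            {
                "type": "function",
                "name": "search",
                "description": "Search the knowledge base.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
            {
                "type": "function",
                "name": "report_grounding",
                "description": "Report which knowledge-base sources were used.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "sources": {"type": "array",
                                    "items": {"type": "string"}}
                    },
                    "required": ["sources"],
                },
            },
        ],
    },
}

print([t["name"] for t in session_update["session"]["tools"]])
# ['search', 'report_grounding']
```

Because this configuration lives entirely on the backend, the client never sees the prompt or the tool definitions.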
With these two tools in place, we now have a system that flows audio to the model, enables the model to call back into app logic on the backend both for searching and for telling us which pieces of grounding data were used, and then flows audio back to the client along with extra messages that let the client know about the grounding information (you can see this in the UI as citations to documents that show up as the answer is spoken).
Using any Real-Time API-enabled client
Note that the middle tier completely suppresses tools-related interactions and overrides system configuration options but otherwise maintains the same protocol. This means that any client that works directly against the Azure OpenAI API will “just work” against the real-time middle tier, since the RAG process is entirely encapsulated on the backend.
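A minimal sketch of that suppression, with illustrative event type names (the preview API’s actual event names may differ):

```python
# Sketch of the middle tier's filtering: tool-related server events are
# consumed on the backend, while everything else (audio deltas, transcripts)
# is forwarded unchanged to the client. Event type names are illustrative.

TOOL_EVENT_TYPES = {
    "response.function_call_arguments.delta",
    "response.function_call_arguments.done",
}

def forward_to_client(event: dict) -> bool:
    """Return True if the event should pass through to the client."""
    return event.get("type") not in TOOL_EVENT_TYPES

events = [
    {"type": "response.audio.delta", "delta": "...base64 audio..."},
    {"type": "response.function_call_arguments.done", "name": "search"},
    {"type": "response.audio_transcript.delta", "delta": "Hello"},
]
passed = [e for e in events if forward_to_client(e)]
print([e["type"] for e in passed])
# ['response.audio.delta', 'response.audio_transcript.delta']
```

Since the filtered stream is protocol-compatible with the raw API, the client code does not need to know the middle tier exists.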
Creating secure generative AI apps
We’re keeping all configuration elements (system prompt, max tokens, etc.) and all credentials (to access Azure OpenAI, Azure AI Search, etc.) in the backend, securely separated from clients. Furthermore, Azure OpenAI and Azure AI Search include extensive security capabilities to further secure the backend, including network isolation to make the API endpoints of both models and search indexes not reachable through the internet, Entra ID to avoid keys for authentication across services, and options for multiple layers of encryption for the indexed content.
Try it today
The code and data for everything discussed in this blog post are available in this GitHub repo: Azure-Samples/aisearch-openai-rag-audio. You can use it as-is, or you can easily swap in your own data and talk to it.
The code in the repo above and the description in this blog post are more of a pattern than a specific solution. You’ll need to experiment to get the prompts right, maybe expand the RAG workflow, and certainly assess it for security and AI safety.
To learn more about the Azure OpenAI gpt-4o-realtime-preview model and the real-time API, see the Azure OpenAI documentation; for Azure AI Search, you’ll find plenty of resources in its documentation as well.
Looking forward to seeing new “talk to your data” scenarios!
Microsoft Tech Community – Latest Blogs –Read More
Uploading Files and Sending data using HttpClient: A Simple Guide
HttpClient stands out for its flexibility. It can send HTTP requests asynchronously, manage headers, handle authentication, and work with cookies, all while giving you detailed control over request and response formats. Its reusable nature makes it an essential tool for making network requests in modern apps, whether you’re connecting to web APIs, microservices, or handling file uploads and downloads.
What you can achieve with HttpClient:
Perform all types of HTTP operations (GET, POST, PUT, DELETE, etc.).
Send and receive complex data such as JSON or multipart form data.
Receive responses from web services or APIs.
Handle asynchronous operations for efficient request management.
In this article, we will walk through creating a simple console application in C# that allows users to input basic student details (Roll Number, Name, Age) and upload a certificate. The certificate file can be in any format (PDF, DOC, etc.). We will use HttpClient to send the data to a web server.
Step 1: Create a new Console Application
Open your terminal or command prompt and run the following command to create a new console application:
dotnet new console -n StudentUploadApp
Navigate into the project directory:
cd StudentUploadApp
Step 2: Add Required Packages
You will need the System.Net.Http package, which is typically included by default in .NET SDK. If you need to install it, use:
dotnet add package System.Net.Http
Step 3: Write the Application Code
Open the Program.cs file and replace the existing code with the following:
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Program
{
    private static readonly HttpClient client = new HttpClient();

    static async Task Main(string[] args)
    {
        Console.Write("Enter Student Roll Number: ");
        string rollNumber = Console.ReadLine();
        Console.Write("Enter Student Name: ");
        string studentName = Console.ReadLine();
        Console.Write("Enter Student Age: ");
        string studentAge = Console.ReadLine();
        Console.Write("Enter path to Birth Certificate PDF: ");
        string pdfPath = Console.ReadLine();

        if (File.Exists(pdfPath))
        {
            await UploadStudentData(rollNumber, studentName, studentAge, pdfPath);
        }
        else
        {
            Console.WriteLine("File not found. Please check the path and try again.");
        }
    }

    private static async Task UploadStudentData(string rollNumber, string name, string age, string pdfPath)
    {
        using (var form = new MultipartFormDataContent())
        {
            form.Add(new StringContent(rollNumber), "rollNumber");
            form.Add(new StringContent(name), "name");
            form.Add(new StringContent(age), "age");

            var pdfContent = new ByteArrayContent(await File.ReadAllBytesAsync(pdfPath));
            pdfContent.Headers.ContentType = MediaTypeHeaderValue.Parse("application/pdf");
            form.Add(pdfContent, "birthCertificate", Path.GetFileName(pdfPath));

            // Replace with your actual API endpoint
            var response = await client.PostAsync("https://yourapi.com/upload", form);

            if (response.IsSuccessStatusCode)
            {
                Console.WriteLine("Student data uploaded successfully!");
            }
            else
            {
                Console.WriteLine($"Failed to upload data: {response.StatusCode}");
            }
        }
    }
}
Please note that you need to replace this URL “https://yourapi.com/upload” with your actual API endpoint where the data should be sent. Ensure the server can handle multipart form data.
Step 4: Modify your API Controller
Create a new controller and replace its content with the following code:
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using System.IO;
using System.Threading.Tasks;

namespace StudentUploadApi.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class StudentController : ControllerBase
    {
        [HttpPost("upload")]
        public async Task<IActionResult> UploadStudentData(
            [FromForm] string rollNumber,
            [FromForm] string name,
            [FromForm] string age,
            [FromForm] IFormFile birthCertificate)
        {
            if (birthCertificate == null || birthCertificate.Length == 0)
            {
                return BadRequest("No file uploaded.");
            }

            var filePath = Path.Combine("UploadedFiles", birthCertificate.FileName);

            // Ensure the directory exists
            Directory.CreateDirectory("UploadedFiles");

            // Save the uploaded file to the specified path
            using (var stream = new FileStream(filePath, FileMode.Create))
            {
                await birthCertificate.CopyToAsync(stream);
            }

            // Here, you can add logic to save the student data to a database or perform other operations
            return Ok(new { RollNumber = rollNumber, Name = name, Age = age, FilePath = filePath });
        }
    }
}
Step 5: Now, your API is ready to accept POST requests. Run the API.
You can run your API with the following command:
dotnet run
Step 6: Finally, run your console application.
You will be prompted to enter the student’s Roll Number, Name, Age, and the path to their certificate file. Make sure the file exists at the specified path.
Now, when you run your console application and upload a student’s details along with the certificate file, the API will receive the data and save the file to UploadedFiles directory in your API project. You can enhance the API by adding validations, error handling, and database integration as needed.
Finally, you should see a confirmation message in the console window.
Conclusion:
In this article, we explored the power of HttpClient by building a simple console application that uploads student details along with a file to a web API. HttpClient is a powerful class in the .NET framework enabling developers to easily perform HTTP operations such as sending and receiving data, managing headers, and handling multipart form data. Whether you’re building client-side apps or interacting with external APIs, HttpClient simplifies communication. With the right setup, you can handle complex file uploads and data transmissions effortlessly in your applications.
An update is available for the Microsoft Malware Protection Engine to control updates to the last access time stamp on scanned files – Microsoft Support
Describes a software update that is available for Microsoft Malware Protection Engine to support disabling of updates to the last access time stamp on scanned files.
An update is available for Windows Defender on Windows Vista – Microsoft Support
Describes a problem in the Windows Defender malware sample submission functionality that prevents Microsoft from receiving requested software samples from Windows Defender users who are running Windows Vista, and describes an update that fixes this issue.
propagateOrbit function hangs for no obvious reason
When the propagateOrbit function is fed certain TLEs, it simply stalls instead of throwing an exception. Since the propagateOrbit function is compiled, there is no way to trace and debug the hangup. When I press the "Pause" button, it greys out and says "pausing". If I press "Stop", I get the following message:
Operation terminated by user during matlabshared.orbit.internal.SGP4.propagate
For example, this TLE appears to be toxic:
1 52643U 00000A 22181.91667824 -.00046528 00000-0 32759-0 0 10
2 52643 53.2133 206.5805 0002347 15.7379 205.9837 15.73290591 1300
Here is a code example:
formatTime = 'uuuu:DDD:HH:mm:ss.SSS';
start_time = '2022:182:11:08:05.800';
dt_start_time = datetime(start_time,'InputFormat', formatTime);
tlestruct = tleread('tle_52643.tle');
[r,v] = propagateOrbit(dt_start_time, tlestruct);
In some cases, propagateOrbit() is unable to resolve an orbit. This is to be expected; however, instead of throwing an exception, the function outputs a set of matrices containing complex numbers. If you then try to do a coordinate transformation on the r and v matrices using eci2ecef(), the code blows up.
Here is an example of a satellite TLE that does not resolve:
1 87954U 00000A 11327.05272112 .00003699 00000-0 57015-0 0 10
2 87954 98.4963 251.3127 0116186 87.4088 331.4134 14.69587133 3680
[r,v] = propagateOrbit(dt_start_time, tlestruct);
r = 1.0e+24 *
-1.2748 - 3.5603i
-4.3786 - 5.0800i
 7.0058 - 3.8228i
On a related note, the tleread function will fail if the BSTAR term's minus sign is not exactly in column 54. If the BSTAR mantissa is less than five digits long and the minus sign gets displaced to the right, tleread() is unable to parse the TLE. Some sites like Celestrak are diligent about adding leading zeros to the BSTAR term, but others are not, which can get you into trouble.
SUGGESTION: Make tleread() more user-friendly?
tle, propagateorbit, two-line elements MATLAB Answers — New Questions
Built in function cd not being found when running a custom function
Hello, I am having an issue where cd is not being found when I run a custom script. The relevant script is as follows, [tmp] being a placeholder that removes identifying information. cvfn is the name of a file that is located in basepath.
function horm_symptom(cvfn)
close all;
basepath={'C:\Users\[tmp]_WorkingFolder';
          'C:\Users\[tmp]_WorkingFolder'};
if exist(basepath{1},'dir'), basepath=basepath{1};
elseif exist(basepath{2},'dir'), basepath=basepath{2};
else, error('basepath does not exist');
end
addpath(genpath(basepath));
savepath=fullfile(basepath,'SingleSubjectData');
if ~exist(savepath,'dir'), mkdir(savepath); end
cd('SingleSubjectData');
When I run the script, I get the following error at the line cd('SingleSubjectData'):
Unrecognized function or variable 'cd'.
However, if I set a breakpoint at cd('SingleSubjectData'); and type the following, I get:
exist('cd','builtin')
ans =
5
which('cd')
ans =
built-in (C:\Program Files\MATLAB\R2023b\toolbox\matlab\general\cd)
I've also successfully used addpath on the above directory with no errors, but even still I get the unrecognized function error.
addpath('C:\Program Files\MATLAB\R2023b\toolbox\matlab\general')
I’ve tried different formats for cd, including adding and removing parentheses, semi-colons, and tildes and nothing has worked.
If I set a break point and type the following directly into the command window it works and changes directory,
cd 'SingleSubjectData'
However, if I try to run the code with the above formatting within the script I get the following error:
horm_symptom(cvfn)
Error: File: horm_symptom.m Line: 21 Column: 1
Using identifier 'cd' as both a variable and a command is not supported. For more information, see "How MATLAB Recognizes Command Syntax".
The above error occurs when I try any of the following formats:
cd SingleSubjectData;
cd SingleSubjectData
cd 'SingleSubjectData'
cd 'SingleSubjectData';
cd ~/SingleSubjectData
I have checked that a variable named "cd" is not getting created, confirmed that the sub folder SingleSubjectData is getting created and does exist and reviewed the How Matlab recognizes command syntax page linked in the error. I am at a loss of how to fix this or whats wrong. Lastly, I know I’m talking about SingleSubjectData, but I have also tried cding into savepath, and have had all the same errors described above. Please help!Hello, I am having an issue where cd is not being found when I run a custom script. The relevant script is as follows, [tmp] being a placeholder that removes identifying information. cvfn is the name of a file that is located in basepath.
function horm_symptom(cvfn)
close all;
basepath={‘C:\Users\[tmp]_WorkingFolder’;
‘C:\Users\[tmp]_WorkingFolder’};
if exist(basepath{1},’dir’), basepath=basepath{1};
elseif exist(basepath{2},’dir’), basepath=basepath{2};
else, error(‘basepath does not exist’);
end
addpath(genpath(basepath));
savepath=fullfile(basepath,’SingleSubjectData’);
if ~exist(savepath,’dir’), mkdir(savepath); end
cd(‘SingleSubjectData’);
When i run the script I get the following error at the line cd(‘SingleSubjectData’)
Unrecognized function or variable ‘cd’.
However, if I set a breakpoint at ‘cd(‘SingleSubjectData’);’ and type the following, I get:
exist(‘cd’,’builtin’)
ans =
5
which(‘cd’)
ans =
built-in (C:Program FilesMATLABR2023btoolboxmatlabgeneralcd)
I’ve also successfully used addpath to the above directory with no errors, but even still I get the unrecognized function error.
addpath(‘C:Program FilesMATLABR2023btoolboxmatlabgeneral’)
I’ve tried different formats for cd, including adding and removing parentheses, semi-colons, and tildes and nothing has worked.
If I set a break point and type the following directly into the command window it works and changes directory,
cd ‘SingleSubjectData’
However, if I try to run the code with the above formatting within the script I get the following error:
horm_symptom(cvfn)
Error: File: horm_symptom.m Line: 21 Column: 1
Using identifier ‘cd’ as both a variable and a command is not supported. For more information, see "How MATLAB
Recognizes Command Syntax".
The above error occurs when I try any of the following formats:
cd SingleSubjectData;
cd SingleSubjectData
cd 'SingleSubjectData'
cd 'SingleSubjectData';
cd ~/SingleSubjectData
I have checked that a variable named "cd" is not getting created, confirmed that the subfolder SingleSubjectData is getting created and does exist, and reviewed the "How MATLAB Recognizes Command Syntax" page linked in the error. I am at a loss as to how to fix this or what's wrong. Lastly, I know I'm talking about SingleSubjectData, but I have also tried cd-ing into savepath and have hit all the same errors described above. Please help!
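One workaround worth trying (a sketch under assumptions, not a confirmed fix: it assumes the savepath variable still holds the absolute folder path built with fullfile above):

```matlab
% Diagnostic: list everything named cd that MATLAB can see; only the
% built-in should appear. Anything else listed is shadowing it.
which cd -all

% Workaround 1: use function syntax with the absolute path held in the
% savepath variable, rather than a relative path via command syntax.
cd(savepath);

% Workaround 2: builtin() bypasses any shadowing function or variable.
builtin('cd', savepath);

% Side note: savepath is itself the name of a MATLAB function (it saves
% the search path), so renaming the variable (e.g. saveDir) avoids
% shadowing that function as well.
```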
error, built in function MATLAB Answers — New Questions
What filtration should be used for a respiratory signal between 5 and 60 breaths per minute?
I have applied LPF and HPF filtering, but for a low-frequency respiratory signal the signal after the HPF is distorted and edge detection is incorrect, as can be seen in the attached photo. Is there some kind of filtering that can handle the respiratory signal range of 5-60 breaths per minute? Or is it better to use the findpeaks function with appropriate limits and only LPF filtering, as in diagram 2 in the attached photo?
% ------------- LPF ------------------------------
N = 5;          % Order
Fstop = 1.4;    % Stopband Frequency
Astop = 30;     % Stopband Attenuation (dB)
Fs = 25;        % Sampling Frequency
h = fdesign.lowpass('n,fst,ast', N, Fstop, Astop, Fs);
hfiltLP = design(h, 'cheby2', 'SystemObject', true);
% ------------- HPF ------------------------------
N = 4;          % Order
Fstop = 4/60;   % Stopband Frequency (min 4 breaths per minute)
Astop = 60;     % Stopband Attenuation (dB)
Fs = 25;        % Sampling Frequency
h = fdesign.highpass('n,fst,ast', N, Fstop, Astop, Fs);
hfiltHP = design(h, 'cheby2', 'SystemObject', true);
respiratory rate, respiration, filtering MATLAB Answers — New Questions
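One alternative worth trying for the question above (a sketch under assumptions, not a verified design): replace the separate LPF/HPF pair with a single zero-phase band-pass covering 5-60 breaths per minute (roughly 0.083-1 Hz), applied with filtfilt so the very-low-frequency components are not phase-distorted the way a causal HPF distorts them. Here resp_raw is a hypothetical name for the recorded signal:

```matlab
Fs  = 25;                        % sampling frequency, as in the question
bpm = [5 60];                    % target respiratory-rate range
fc  = bpm/60;                    % convert to Hz: approx [0.083 1] Hz
[b, a] = butter(2, fc/(Fs/2), 'bandpass');  % low-order Butterworth band-pass
% filtfilt runs the filter forward and backward: zero phase lag overall,
% at the cost of doubling the effective filter order.
resp_filt = filtfilt(b, a, resp_raw);
% Peak detection constrained by the physiology: at most 60 breaths/min
% means successive peaks are at least 1 s (Fs samples) apart.
[pks, locs] = findpeaks(resp_filt, 'MinPeakDistance', Fs*60/bpm(2));
```

Keeping the filter order low matters here: a high-order band-pass with a cutoff this close to DC can ring and distort breath onsets even with zero-phase filtering.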
Where shall I start if i would like to learn IT and move onto cybersecurity?
Which learning paths and certifications would be recommended for someone trying to learn cloud security for SOC and IR work?
By the way hi my name is Enyel.
Read More
So many environments for Planner but little ability to share.
Ok, we’ve been bashing our heads trying to figure out where things end up when creating plans using Planner Premium.
If I go to Planner and create a Plan it is stored in project.microsoft.com which is accessible to both Planner and Project Online (Planner web). I can link that Plan to a channel and it appears in the tabs. What I can’t do is add it to the Channel’s HomePage (add existing plan is greyed out). Nor can I access it with Project Desktop (PWA).
If I create a new plan from the HomePage, it is added to the HomePage, I can view it from Planner in Teams. I can go to planner.cloud.microsoft and it will be visible there. I can’t see it with Planner Online (project.microsoft.com) nor can I see it via Project Desktop (PWA).
If I create a plan with Project Desktop (PWA) I can open it with Project Online once it is published but I can’t see it on Planner in Teams or Online. I can add it to the Tabs in Teams however but can’t add it to the Channel HomePage (Project is no longer listed in Toolbox Web parts, just Planner).
I can’t copy paste plans or selected tasks between environments. I can export plans but where is the import feature for Planner/Project Online (planner.cloud.microsoft)?
Is it because of misconfiguration/permissions, or was it planned this way? Is there a way to allow Plans stored in project.microsoft.com to be posted on the HomePage? I would also love to have Project and Roadmap back too. It all feels very disjointed. Too much Agile, not enough Waterfall.
Read More
Using lists to display timetables
Hi,
I’ve created a timetable to track student lessons at my school (what subject and which staff – selected from lists) I have all students tracked on the same table, and a sheet for each day of the week (eg Mon – Fri)
What I'd really like to add now is a sheet where I can select a student's name from a list and generate a full-week timetable just for that student. I've tried to work out how to do that, but I'm struggling!
if possible I’d then like to do the same for staff members to show which lessons and students they’re supporting – but think that may be more difficult!
Happy to attach where I’m up to if it helps!
Read More
How to Validate Input Sum Equals 100% Before Submitting an Adaptive Card in Power Automate for Teams
Hello everyone,
I’m working on a Power Automate flow where I need to collect three percentage values (W1, W2, and W3) from users via an Adaptive Card in Microsoft Teams. My main challenge is that I need to validate that the sum of these three inputs equals 100% before the user can submit the card. If the sum is not 100%, the card should prompt the user to adjust their inputs accordingly.
I’m using the Teams action “Post an Adaptive Card to a Teams channel and wait for a response” in Power Automate to generate and post this card.
The issue I’m facing is:
I cannot reference or access the inputted values within the Adaptive Card to perform the validation before submission.
I need a way to prevent the user from submitting the card unless W1 + W2 + W3 equals 100%.
Ideally, the card would display an error message or disable the “Submit” button until the validation condition is met.
I’ve looked into Adaptive Card expressions but haven’t found a solution that allows for validating user input within the card itself before submission.
Any guidance or suggestions would be greatly appreciated!
Thank you in advance for your help.
Read More
HPC Lift and Shift Cloud Migration: Architecture and Best Practices
Have you ever wondered what it takes to have an enterprise level HPC environment in the Cloud? What components should be in place and what steps should be taken to move from an on-premises environment to a Cloud environment? And what are the best practices in this process? Everything starts with a Proof-of-Concept (PoC), in which an organization assesses how the key applications will perform in the Cloud, considering not only performance but also the costs involved. Once a decision is made, it is important to understand what it takes to have an enterprise level HPC Cloud environment.
Based on our experience with various clients, partners, and product groups, we have put together comprehensive documentation on HPC lift-and-shift Cloud migration, and this blog post gives an overview of what we cover in the document. Feedback is always welcome, as we will keep improving the documentation over time.
TL;DR
– We just made available detailed documentation on HPC lift-and-shift cloud migration, covering components, steps, examples, and best practices. We also provide references for products, code repositories, and blog posts.
– Documentation can be accessed here: https://learn.microsoft.com/en-us/azure/high-performance-computing/lift-and-shift-overview
DOCUMENTATION OVERVIEW
Here we provide an overview of the documentation: https://learn.microsoft.com/en-us/azure/high-performance-computing/lift-and-shift-overview
On-premises. We start the document by describing what a typical on-premises HPC environment looks like, which includes compute nodes, job schedulers like SLURM, PBS, or LSF, identity management, storage options, and monitoring tools, all hosted within a private network.
Personas. After discussing the on-premises environment, we talk about the personas. From our experience, we observe a lot of discussion on what changes and what does not change for all people involved when moving from on-premises to the Cloud. We discuss their responsibilities and new tasks in an HPC Cloud setup, considering four personas:
– End-user (engineer / scientist / researcher)
– HPC administrator
– Cloud administrator
– Business manager / owner
HPC Cloud target architecture. The next discussion is an overview of the target HPC Cloud architecture, which highlights that there is not much change compared to an on-premises environment in terms of the conceptual components involved. One of the key differentiators is that resources are allocated on demand, allowing users to access more resources as needed.
Migration guide. After a brief discussion on exploring the Cloud environment through a Proof-of-Concept (PoC), we dive deep into the migration guide itself. We have broken the guide into five steps.
Basic infrastructure. The focus here is on setting up resource groups, networking, and basic storage, which serve as the backbone of a successful HPC lift-and-shift deployment;
Base services. This section covers the core components related to the job scheduler, including the resource orchestrator for provisioning and setting up resources, identity management for user authentication, monitoring (including node health checks), and accounting to better understand the status and usage of resources. Each component plays a crucial role in ensuring the performance, scalability, and security of the HPC environment.
Storage. This section highlights the critical considerations for managing storage in an HPC cloud environment, focusing on the variety of cloud storage options and the processes for migrating data. Also, it offers practical guidance for setting up storage and managing data migration, with an emphasis on scalability and automation as the HPC environment evolves.
Compute nodes. This section provides guidance on selecting and managing compute resources efficiently for HPC workloads in the cloud, including some recommendations and pointers on VM images.
End user entry point. This section explores the options for user interaction, emphasizing the importance of addressing potential latency issues that may arise when moving to the cloud. It also provides guidance on tools, services, and best practices for optimizing the user entry point for HPC lift-and-shift deployments. A quick-start setup is included to help establish this component efficiently, with the goal of automating it as the cloud infrastructure matures.
WHAT IS NEXT?
We will continue to improve and expand the documentation on this topic as new services, products, and learnings become available. The documentation does not aim to cover every possible deployment in the Cloud, but rather to provide guidance based on patterns we observe in how customers use the Cloud to run their HPC workloads. If there is any subject on which more details are required, please send us a note!
LINK TO FULL DOCUMENTATION
https://learn.microsoft.com/en-us/azure/high-performance-computing/lift-and-shift-overview
#AzureHPCAI
Microsoft Tech Community – Latest Blogs –Read More