Month: July 2024
Windows Device Enrolment
Hello Team,
We have licenses for M365 Business Basic and Business Standard, along with Intune Plan 1. In our organization, we manage both corporate-owned and BYOD devices and need to enroll them in Intune. However, since we lack Entra ID P1 (formerly AAD P1) licenses, automatic enrollment isn’t available to us. Could you please advise on the best method to enroll our devices and utilize Intune’s full capabilities? Your urgent support on this matter would be greatly appreciated. Thank you.
@steven hosking @ASquareDozen @Joe Lurie
Upload to a restricted cloud service domain or access from an unallowed browser
-The action in DLP rules “Upload to a restricted cloud service domain or access from an unallowed browser” does not seem to be working as expected.
-Currently, a number of policies are meant to detect certain sensitivity labels as well as certain sensitive information types, and among the actions taken to restrict data/files being shared is the action named above.
-Activity explorer shows the policy match, but the enforcement action is always audit instead of block (which is what is specified in the policy).
-Service domains and domain groups are added with an action of block in the DLP settings.
-The unallowed browser is also specified.
What could be the issue here? Any ideas?
Unable to Connect to Bank Account: QuickBooks Error 103
I am encountering an issue with QuickBooks and need some help. Whenever I try to sign in to my bank account through QuickBooks, I receive an error message stating “Error 103: The information entered does not match your bank’s records.”
I have already double-checked my login credentials and confirmed they are correct by logging directly into my bank’s website. Despite this, the error persists in QuickBooks.
Could you please provide guidance on how to resolve this error? Any steps or troubleshooting tips would be greatly appreciated.
Building Intelligent Applications with Local RAG in .NET and Phi-3: A Hands-On Guide
Hi!
In this blog post we will learn how to do Retrieval Augmented Generation (RAG) using local resources in .NET! We’ll show you how to combine the Phi-3 language model, Local Embeddings, and Semantic Kernel to create a RAG scenario.
What is RAG?
Before we dive into the demo, let’s quickly recap what RAG is. RAG is a hybrid approach that enhances the capabilities of a language model by incorporating external knowledge. For example, using a RAG approach we can retrieve relevant documents from a knowledge base and use them to generate more informed and accurate responses. This is particularly useful in scenarios where an LLM needs up-to-date information or specific domain knowledge that isn’t contained within its initial training data.
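The retrieve-then-generate idea can be sketched in a few lines. The snippet below (Python for brevity, with a toy bag-of-words “embedding” standing in for a real embedding model; the fact strings mirror the demo data later in this post, but everything else is illustrative, not part of the original sample) retrieves the closest fact and builds a grounded prompt:

```python
# Minimal RAG sketch: embed, retrieve the most relevant fact, build a grounded prompt.
# The "embedding" here is a toy bag-of-words vector; real systems use learned embeddings.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

facts = [
    "Gisela's favourite super hero is Batman",
    "Bruno's favourite super hero is Invincible",
]

def retrieve(question, k=1):
    # Rank the knowledge base by similarity to the question, keep the top k.
    q = embed(question)
    return sorted(facts, key=lambda f: cosine(q, embed(f)), reverse=True)[:k]

def build_prompt(question):
    # Prepend the retrieved context so the model can answer from it.
    context = "\n".join(retrieve(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is Bruno's favourite super hero?"))
```

The demo below does the same thing with real components: local embeddings for the vectors, a semantic memory for the store, and Phi-3 for the generation step.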
The Components
To build our RAG setup, we’ll be using the following components:
Phi-3: Our local LLM, which is a powerful tool for generating human-like text. Check out the Phi-3 Cookbook for more details.
Smart Components Local Embeddings: This package will help us create embeddings, which are numerical representations of text that capture its semantic meaning. You can learn more about it in the Smart Components Local Embeddings documentation.
Semantic Kernel: This acts as the main orchestrator, integrating Phi-3 and Smart Components to create a seamless RAG pipeline. Visit the Semantic Kernel GitHub page for more information.
Demo Scenario
The demo scenario below is designed to answer a specific question, “What is Bruno’s favourite super hero?”, using two different approaches.
Ask the question directly to the Phi-3 model. The model will decline to give an answer, since Phi-3 knows nothing about Bruno.
Ask the question to the Phi-3 model, and add a semantic memory object with fan facts loaded. Now the response will be based on the semantic memory content.
This is the app running:
Code Sample
Let’s jump to the code. The code below is a C# console application that demonstrates the use of a local model hosted in Ollama and semantic memory for search.
Here’s a step-by-step breakdown of the program:
The program starts by defining the question and announcing the two approaches it will use to answer it. The first approach is to ask the question directly to the Phi-3 model, and the second approach is to add facts to a semantic memory and ask the question again.
The program creates a chat completion service using the Kernel.CreateBuilder() method. It adds Chat Completion using a local model, and local text embedding generation to the builder, then builds the kernel.
The program then asks the question directly to the Phi-3 model and prints the response.
The program gets the embeddings generator service and creates a new semantic text memory with a volatile memory store and the embedding generator.
The program adds facts to the memory collection. These facts are about Bruno and Gisela’s favourite super heroes and the last super hero movies they watched.
The program creates a new text memory plugin with the semantic text memory and imports the plugin into the kernel.
The program sets up the prompt execution settings and the kernel arguments, which include the question and the memory collection.
Finally, the program asks the question again, this time using the semantic memory, and prints the response.
The program uses several external libraries, including:
Microsoft.Extensions.Configuration and Microsoft.Extensions.DependencyInjection for dependency injection and configuration.
Microsoft.KernelMemory, Microsoft.SemanticKernel, Microsoft.SemanticKernel.ChatCompletion, Microsoft.SemanticKernel.Connectors.OpenAI, Microsoft.SemanticKernel.Embeddings, Microsoft.SemanticKernel.Memory, and Microsoft.SemanticKernel.Plugins.Memory for the semantic kernel and memory functionalities.
This program is a great example of how AI can be used to answer questions using both direct model querying and semantic memory.
// Copyright (c) 2024
// Author : Bruno Capuano
// Change Log :
// - Sample console application to use a local model hosted in ollama and semantic memory for search
//
// The MIT License (MIT)
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.

#pragma warning disable SKEXP0001
#pragma warning disable SKEXP0003
#pragma warning disable SKEXP0010
#pragma warning disable SKEXP0011
#pragma warning disable SKEXP0050
#pragma warning disable SKEXP0052

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.KernelMemory;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Embeddings;
using Microsoft.SemanticKernel.Memory;
using Microsoft.SemanticKernel.Plugins.Memory;

var question = "What is Bruno's favourite super hero?";
Console.WriteLine($"This program will answer the following question: {question}");
Console.WriteLine("1st approach will be to ask the question directly to the Phi-3 model.");
Console.WriteLine("2nd approach will be to add facts to a semantic memory and ask the question again");
Console.WriteLine("");

// Create a chat completion service
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "phi3",
    endpoint: new Uri("http://localhost:11434"),
    apiKey: "apikey");
builder.AddLocalTextEmbeddingGeneration();
Kernel kernel = builder.Build();

Console.WriteLine("Phi-3 response (no memory).");
var response = kernel.InvokePromptStreamingAsync(question);
await foreach (var result in response)
{
    Console.Write(result);
}

// separator
Console.WriteLine("");
Console.WriteLine("==============");
Console.WriteLine("");

// get the embeddings generator service
var embeddingGenerator = kernel.Services.GetRequiredService<ITextEmbeddingGenerationService>();
var memory = new SemanticTextMemory(new VolatileMemoryStore(), embeddingGenerator);

// add facts to the collection
const string MemoryCollectionName = "fanFacts";
await memory.SaveInformationAsync(MemoryCollectionName, id: "info1", text: "Gisela's favourite super hero is Batman");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info2", text: "The last super hero movie watched by Gisela was Guardians of the Galaxy Vol 3");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info3", text: "Bruno's favourite super hero is Invincible");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info4", text: "The last super hero movie watched by Bruno was Aquaman II");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info5", text: "Bruno doesn't like the super hero movie: Eternals");

TextMemoryPlugin memoryPlugin = new(memory);

// Import the text memory plugin into the Kernel.
kernel.ImportPluginFromObject(memoryPlugin);

OpenAIPromptExecutionSettings settings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,
};

var prompt = @"
Question: {{$input}}
Answer the question using the memory content: {{Recall}}";

var arguments = new KernelArguments(settings)
{
    { "input", question },
    { "collection", MemoryCollectionName }
};

Console.WriteLine("Phi-3 response (using semantic memory).");
response = kernel.InvokePromptStreamingAsync(prompt, arguments);
await foreach (var result in response)
{
    Console.Write(result);
}
Console.WriteLine("");
The full source code is available here: Program.cs.
Test This Scenario for Free Using CodeSpaces in the Phi-3 Cookbook
To help you get started with Phi-3 and experience its capabilities firsthand, we are thrilled to introduce the support of Codespaces in the Phi-3 Cookbook.
The C# Ollama Labs are designed to test Phi-3 with C# samples directly in GitHub Codespaces as an easy way for anyone to try out Phi-3 with C# entirely in the browser.
Check the guide here: Phi-3CookBook/md/07.Labs/CsharpOllamaCodeSpaces/CsharpOllamaCodeSpaces.md at main · microsoft/Phi-3CookBook (github.com)
Conclusion
Phi-3, local embeddings, and Semantic Kernel are a great combination to support RAG scenarios in local mode.
And with Semantic Kernel, it is easy to later switch to Azure OpenAI Service to scale at the enterprise level!
Happy Coding!
Bruno Capuano
Microsoft Tech Community – Latest Blogs –Read More
create a vector of all the odds positive integers smaller than 100 in increasing order to save it into a variable
Hi, I’m a new student in MATLAB. I’m trying to do this exercise, but I don’t understand the size requirement: the test asks me for a size of [1 50] and says mine is currently [1 99]. The code I wrote was:
odds = 1:1:99
Thank you.
odds, colon, homework MATLAB Answers — New Questions
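The vector 1:1:99 steps by 1 and so contains 99 values; stepping by 2 (odds = 1:2:99 in MATLAB) yields the 50 odd integers the test expects. The count can be sanity-checked with the Python analogue of that range:

```python
# MATLAB's 1:1:99 counts by 1 (99 values); stepping by 2 keeps only the odds.
odds = list(range(1, 100, 2))  # Python analogue of MATLAB's 1:2:99
print(len(odds))               # the 1-by-50 size the test asks for
print(odds[:3], odds[-1])
```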
xmlwrite – Control the order of attributes
Hi,
I wrote the following script to generate an XML file…
xml_doc = com.mathworks.xml.XMLUtils.createDocument('Node');
root = xml_doc.getDocumentElement();
tool_elem = xml_doc.createElement('Tool');
tool_elem.setAttribute('name','me');
tool_elem.setAttribute('defaultValue','1122');
root.appendChild(tool_elem);
disp(xmlwrite(xml_doc));
… and I get the following result:
<?xml version="1.0" encoding="utf-8"?>
<Node>
<Tool defaultValue="1122" name="me"/>
</Node>
I know that the order is irrelevant from the point of view of the XML specification, but I would like to have the attribute "name" before "defaultValue" for readability.
Can I modify the order of the attributes?
xmlwrite, attribute MATLAB Answers — New Questions
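For contrast, a serializer that honors insertion order makes the desired output straightforward. Python's xml.etree.ElementTree (which preserves the order in which attributes are set, since Python 3.8) illustrates the behavior the poster wants; MATLAB's xmlwrite does not appear to expose such a control, so there post-processing of the output string would be needed instead:

```python
# Attribute order follows insertion order in ElementTree (Python 3.8+),
# unlike the Xerces serializer behind MATLAB's xmlwrite, which alphabetizes.
import xml.etree.ElementTree as ET

root = ET.Element("Node")
tool = ET.SubElement(root, "Tool")
tool.set("name", "me")            # set first -> serialized first
tool.set("defaultValue", "1122")  # set second -> serialized second
xml = ET.tostring(root, encoding="unicode")
print(xml)
```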
CZUZM – Earn a $20 sign up bonus and 30% referral commission
What is AttaPoll?
AttaPoll is a mobile app that pays users for completing surveys. It connects businesses and market researchers with individuals willing to share their opinions and experiences. AttaPoll offers a simple way for companies to gather data and insights while providing a way for users to earn extra cash.
How Much Do Surveys Pay on AttaPoll?
Survey payouts on AttaPoll vary depending on the length and complexity. Generally, shorter surveys pay around $0.25 to $0.50, while longer surveys can pay up to $5.00 or more. Users are notified of the amount they will be paid before starting a survey.
What is the AttaPoll Referral Program?
The AttaPoll Refer and Earn program allows users to earn 10% of the money their referred friends make by completing surveys on the app. There is no upper limit to the amount that can be earned through referrals.
How Does the AttaPoll Referral Program Work?
To participate, users invite friends to download and use the AttaPoll app. Once their friends sign up and complete surveys, the user earns 10% of the money their friend makes.
Is There a Limit to How Many Friends I Can Refer?
No, there is no limit to how many friends a user can refer to AttaPoll. The more friends referred, the more potential earnings through the Refer and Earn program.
How Do I Refer Friends to AttaPoll?
Users can refer friends by sharing their unique referral link found in the “Refer and Earn” section of the app.
How Do I Get Paid on AttaPoll?
Users can receive payments through PayPal or redeem earnings for gift cards to retailers like Amazon or iTunes. The minimum payout threshold for PayPal is $3.00, while gift cards start at $5.00.
AttaPoll Referral Codes
Some popular AttaPoll referral codes include:
– CZUZM – Earn a $20 sign up bonus and 30% referral commission
– CZUZM – Earn a 10% commission on referred friends’ survey earnings
– CZUZM – Earn a $20 sign up bonus and 15% referral commission
In summary, the AttaPoll referral program allows users to earn extra cash by inviting friends to complete surveys. Referral codes provide additional bonuses and commissions. The app offers a simple way to earn money by sharing opinions.
New and Free Azure Arc Applied Skill Credential
Microsoft recently launched a brand new Applied Skill Credential related to deploying and managing Azure Arc enabled servers. Here’s me talking about it:
An applied skill is a hands on test where you perform a set of real world tasks based on a couple of scenarios.
Azure Arc is a set of technologies from Microsoft that extends Azure management and services to workloads in multi-cloud and on-premises environments. Using Arc, you can manage and govern on-premises, edge, and multi-cloud environments from a single control plane in Azure.
Taking this 60-minute free practical exam gives you a credential that’s useful if you want to demonstrate proficiency in connecting Windows Server computers in hybrid environments to Azure Arc, as well as using Arc to secure, manage, and maintain those connected computers.
Specific skills include:
Deploy Azure resources by using an Azure Resource Manager template.
Implement the operating system prerequisites for connecting Azure VMs to Azure Arc.
Prepare for connecting on-premises servers to Azure Arc.
Connect a Windows Server to Azure Arc by using Windows Admin Center.
Connect Windows servers to Azure Arc non-interactively at scale.
Create a policy assignment for Azure Arc-enabled Windows servers.
Evaluate results of the policy assignment.
Configure Microsoft Defender for Cloud-based protection of Azure Arc-enabled Windows servers.
Review the Microsoft Defender for Cloud-based protection of Azure Arc-enabled Windows servers.
Configure VM Insights for Azure Arc-enabled Windows servers.
Review the monitoring capabilities of Azure Arc-enabled Windows servers.
Configure Update Manager for Azure Arc-enabled Windows servers.
Review Update Manager capabilities for Azure Arc-enabled Windows servers.
You can prepare for this credential by working through a seven module learning path on Microsoft Learn. The final module in this learning path involves creating a hands on lab and using your own Azure subscription to configure and manage Arc enabled servers. You can find the learning path here: https://learn.microsoft.com/en-us/training/paths/deploy-manage-azure-arc-enabled-servers/
When you are ready to take the credential, you can do so for free by navigating to the following page:
Thanks for your attention and good luck on achieving the credential!
Microsoft Tech Community – Latest Blogs –Read More
I am implementing forward neural network for prediction while taking weights from patternnet trained model
Dir = '.';
outputFile = fullfile(Dir, 'net_test1.mat');
load(outputFile, 'TrainedNet');
%%
ih1w = TrainedNet.IW{ 1, 1 };
h1h2w = TrainedNet.LW{ 2, 1 };
h2ow = TrainedNet.LW{ 3, 2 };
h1b = TrainedNet.b{1};
h2b = TrainedNet.b{2};
ob = TrainedNet.b{3};
%%
maxx = TrainedNet.inputs{1}.processSettings{1,1}.xmax;
minx = TrainedNet.inputs{1}.processSettings{1,1}.xmin;
gain = TrainedNet.inputs{1}.processSettings{1,1}.gain;
rangex = TrainedNet.inputs{1}.processSettings{1,1}.xrange;
offset = TrainedNet.inputs{1}.processSettings{1,1}.xoffset;
TrainedNet.inputs{1}.processSettings{1,1}
%%
inputlayer = ones(1,1036);
inputlayer = inputlayer';
inputlayer_normalized = [];
for x = 1:1036
    inputlayer_normalized(x) = (inputlayer(x)-offset(x))*gain(x);
end
% Initialize variables
h1size = size(ih1w, 1);
inputsize = size(ih1w, 2);
h2size = size(h1h2w, 1);
outputsize = size(h2ow, 1);
% First hidden layer computation
hl1 = zeros(1, h1size);
for k = 0:h1size-1
    s = 0; % accumulator (renamed from "sum" to avoid shadowing the built-in)
    for i = 0:inputsize-1
        s = s + (ih1w(k+1, i+1) * inputlayer_normalized(i+1));
    end
    s = s + h1b(k+1);
    hl1(k+1) = tanh(s);
end
% Second hidden layer computation
hl2 = zeros(1, h2size);
for k = 0:h2size-1
    hl2(k+1) = 0;
    for i = 0:h1size-1
        hl2(k+1) = hl2(k+1) + (h1h2w(k+1, i+1) * hl1(i+1));
    end
    hl2(k+1) = hl2(k+1) + h2b(k+1);
    hl2(k+1) = tanh(hl2(k+1));
end
% Output layer computation
ol = zeros(1, outputsize);
for k = 0:outputsize-1
    ol(k+1) = 0;
    for i = 0:h2size-1
        ol(k+1) = ol(k+1) + (h2ow(k+1, i+1) * hl2(i+1));
    end
    ol(k+1) = ol(k+1) + ob(k+1);
    ol(k+1) = sigmoid(ol(k+1));
end
Ipred = TrainedNet(inputlayer);
% Local functions must appear after all other script code.
function y = tanh(x)
    y = (2 / (1 + exp(-2 * x))) - 1;
end
function y = sigmoid(x)
    y = 1 / (1 + exp(-x));
end
This is the code I am implementing for the neural network trained with MATLAB’s built-in patternnet function. I am using its weights and preprocessing settings,
but I am not getting the same output in the variables ol and Ipred.
patternnet MATLAB Answers — New Questions
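Two things worth double-checking against the code above: MATLAB's mapminmax "apply" step is y = (x - xoffset) .* gain + ymin (with ymin = -1 by default; the loop above omits the "+ ymin" term), and patternnet's default output layer is softmax rather than a logistic sigmoid. A small vectorized sketch of such a forward pass, in Python with made-up weights purely to exercise the shapes (all names and values here are illustrative, not taken from the trained network):

```python
import numpy as np

def mapminmax_apply(x, xoffset, gain, ymin=-1.0):
    # MATLAB's mapminmax 'apply' step: note the trailing "+ ymin" term.
    return (x - xoffset) * gain + ymin

def softmax(v):
    # Numerically stable softmax, patternnet's default output transfer function.
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x, IW, b1, LW21, b2, LW32, b3):
    # tansig (tanh) hidden layers, softmax output.
    h1 = np.tanh(IW @ x + b1)
    h2 = np.tanh(LW21 @ h1 + b2)
    return softmax(LW32 @ h2 + b3)

# Tiny random weights just to exercise the shapes (4 inputs, 3 and 2 hidden, 2 outputs).
rng = np.random.default_rng(0)
x = mapminmax_apply(np.ones(4), xoffset=np.zeros(4), gain=np.full(4, 0.5))
out = forward(x,
              rng.normal(size=(3, 4)), rng.normal(size=3),
              rng.normal(size=(2, 3)), rng.normal(size=2),
              rng.normal(size=(2, 2)), rng.normal(size=2))
print(out, out.sum())
```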
problems in modeling a three-phase four-winding transformer YNyn0yn0+d5 with three “multi-winding transformer” in zero sequence component
I want to model a three-phase four-winding transformer YNyn0yn0+d5 with two low-voltage windings (yn0yn0) and a compensation winding (d5) using Simscape Electrical. The transformer is a five-limb transformer. Since the transformer has four windings, I had to connect three single-phase transformers (model in Simulink: multi-winding transformer) in star and delta configurations respectively.
From the transformer test report, I have calculated the parameters of the T-equivalent circuit diagram. Here, I had to calculate the longitudinal impedances of the compensation winding using the measurement of zero sequence component, because it was not measured in the short-circuit test.
The simulated values of the open-circuit and short-circuit tests in the positive sequence component agree very well with the values from the transformer test report.
My problem:
In the measurement of zero sequence component, I only get matching values for the measurement that I used for the calculation of the compensation winding (HV supply, compensation winding short-circuited). In the further zero sequence measurements (additionally, one LV winding short-circuited), the short-circuit voltage is five times too high.
Questions:
Is there possibly a coupling in the transformer only in the zero sequence component?
Or does anyone already know this problem?
Or does anyone have an idea of how I can model the transformer using other Simulink models?
multi-winding transformer, compensation winding, simscape electrical, transformer, transformer coupling, zero sequence MATLAB Answers — New Questions
Getting Error as IIS Restart Failure with Access Denied, You must be an Administrator
Team,
I get the above error whenever I try to stop and start IIS with iisreset, even though I am executing it as an Administrator user.
PID controller, difference when plotting the step response with the PID Controller block in MATLAB and Simulink
Hi everyone,
Please tell me: why is there a difference in the step response between the PID controller implemented in MATLAB below and the PID Controller block in Simulink?
s = tf('s');
g = 1.883e5/(s*(s^2+4466*s+6.43e6));
kp = 60;
ki = 63000;
kd = 3;
gpid = pid(kp,ki,kd);
gsys = feedback(g*gpid,1);
step(gsys)
How to unprotect an Excel sheet if you forgot the password
I recently encountered a problem and hope to get your help. I set a protection password on an Excel file a while ago. Now I want to modify some data, but I have forgotten the password. Is there any way to remove the protection and unprotect the Excel sheet? If anyone knows a solution or has had a similar experience, please share it. Thank you very much!
This file is very important to me, and it contains a lot of work data. I have tried some methods found on the Internet, but none of them worked. It would be great if someone could provide some specific steps or recommend some tools.
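If the sheet is only protected against editing (as opposed to the whole workbook being encrypted with a file-open password), one commonly shared approach for your own files is to edit the workbook XML directly: an .xlsx file is a ZIP archive, and worksheet protection is just a `<sheetProtection>` element inside each sheet's XML part. A hedged sketch in Python (the function name is ours, not a standard API, and this cannot decrypt a file that requires a password to open):

```python
import re
import zipfile

def unprotect_sheets(src, dst):
    """Rewrite an .xlsx, stripping the <sheetProtection .../> element from
    every worksheet part.

    This removes *worksheet* protection (the edit lock). It does not, and
    cannot, open a workbook encrypted with a file-open password.
    """
    with zipfile.ZipFile(src) as zin, \
         zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            # Worksheet parts live under xl/worksheets/ in the OOXML package
            if item.filename.startswith("xl/worksheets/") and item.filename.endswith(".xml"):
                data = re.sub(rb"<sheetProtection[^>]*/>", b"", data)
            zout.writestr(item.filename, data)
```

Usage: `unprotect_sheets("protected.xlsx", "unprotected.xlsx")`, then open the copy in Excel. Always work on a copy, and only on files you are authorized to modify.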
PostgreSQL with Local Small Language Model and In-Database Vectorization | Azure
Improve search capabilities for your PostgreSQL-backed applications using vector search and embeddings generated in under 10 milliseconds without sending data outside your PostgreSQL instance. Integrate real-time translation, sentiment analysis, and advanced AI functionalities securely within your database environment with Azure Local AI and Azure AI Service. Combine the Azure Local AI extension with the Azure AI extension to maximize the potential of AI-driven features in your applications, such as semantic search and real-time data translation, all while maintaining data security and efficiency.
Joshua Johnson, Principal Technical PM for Azure Database for PostgreSQL, demonstrates how you can reduce latency and ensure predictable performance by running locally deployed models, making it ideal for highly transactional applications.
Transform your PostgreSQL app’s performance.
Precise, relevant results for complex queries.
Enhance search accuracy with semantic search and vector embeddings. Start here.
Leverage Azure local AI and Azure AI Service together.
Watch our full video here:
QUICK LINKS:
00:00 — Improve search for PostgreSQL
01:21 — Increased speed
02:47 — Plain text descriptive query
03:20 — Improve search results
04:57 — Semantic search with vector embeddings
06:10 — Test it out
06:41 — Azure local AI extension with Azure AI Service
07:39 — Wrap up
Link References
Check out our previous episode on Azure AI extension at https://aka.ms/PGAIMechanics
Get started with Azure Database for PostgreSQL — Flexible Server at https://aka.ms/postgresql
To stay current with all the updates, check out our blog at https://aka.ms/azurepostgresblog
Unfamiliar with Microsoft Mechanics?
As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Video Transcript:
-You can now improve search for your Postgres-backed application and take advantage of vector search, generating vector embeddings in under 10 milliseconds, all without the need to send data outside of your Postgres instance.
-In fact, vectors are core to the generative AI experience where words and data are broken down into coordinate-like values, and the closer they are in value, the more similar they are in concept. As you submit prompts, those are broken down into vectors and used to search across data stored in the database to find semantic matches.
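The "closer in value, the more similar in concept" idea above boils down to cosine similarity between embedding vectors. A toy illustration in Python with NumPy (the 3-dimensional "embeddings" and labels below are made up for the sketch; a real model such as multilingual-E5-small emits 384 dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings"; real models emit hundreds of dimensions (e.g. 384).
query = [0.9, 0.1, 0.0]   # "pet-friendly condo"
doc_a = [0.8, 0.2, 0.1]   # "apartment that allows dogs"
doc_b = [0.0, 0.1, 0.9]   # "car rental desk"

# Rank documents by similarity to the query; the semantically closer one wins.
scores = {name: cosine_similarity(query, v)
          for name, v in [("doc_a", doc_a), ("doc_b", doc_b)]}
```

This ranking step is exactly what the database performs at scale when a vector index and a distance operator are used in the query.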
-And now in addition to the Azure AI extension that we showed before where we make a secure API call to the Azure OpenAI service and then transmit data via the Azure AI extension to generate vector embeddings with the ADA model, you now have a new option with the Azure local AI extension that lets you generate embeddings without your data ever leaving the server.
-This is made possible by running a purpose-built small language model called multilingual-E5-small developed by Microsoft Research. It runs on the ONNX Runtime inside the same virtual machine as your Postgres database to generate vector embeddings without transmitting your data outside of your Postgres server instance. The biggest benefit to using locally deployed models is the decrease in latency from calls to a model hosted remotely.
-It also makes timing and performance more predictable because you won’t hit query or token limits resulting in retries. Let me show you an example of how much this speeds things up. Here, I’ve created two SQL functions for local and remote embedding generation. Both are using the same data and the same dimension size of 384.
-And here, I have both ready to run side by side in pgbench. On the left, I'll use the Azure AI extension to call the remote API to generate and return a vector embedding using the ADA 3 model, running 600 transactions. On the right, I'll use local in-database embeddings to generate and return a vector embedding with the multilingual-E5 model, running 800 transactions.
-I’ll start by kicking off the remote API SQL function on the left. Then I’ll run the local embedding model on the right. This is going to run for more than 30 seconds, but I’ll speed things up a little to save time. As you can see, generating the embeddings locally on the right reduced average latency from around 62 milliseconds to four milliseconds, and I ran the 800 transactions in about three seconds.
-On the left, I ran 200 fewer transactions than I did using the local embeddings. This took 37 seconds versus three seconds, so we were also able to process 242 transactions per second locally versus just 16 transactions per second with a remotely hosted model and API. That makes using the local AI extension around 15 times faster, which makes it a great option for highly transactional applications like e-commerce apps, ticketing systems, or even to run your chatbot knowledge bases and others.
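As a quick sanity check, the figures quoted in the demo are mutually consistent; in Python:

```python
# Figures quoted above: 600 remote transactions in 37 s,
# 800 local transactions at 242 transactions per second.
remote_tx, remote_secs = 600, 37.0
local_tx, local_tps = 800, 242.0

remote_tps = remote_tx / remote_secs   # about 16 transactions/second
local_secs = local_tx / local_tps      # about 3.3 seconds
speedup = local_tps / remote_tps       # about 15x
```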
-Let me show you how you can apply vector search and gen AI using our sample hotel booking app. First, I’ll show the experience without vectors, so you can see the true difference in experience. I’ll start with a plain text descriptive query to find properties in Seattle near the Space Needle with three bedrooms that allow pets.
-This is using a few built-in Postgres capabilities to drive some of the semantic meaning and keywords in the query with keyword and like clauses. You’ll see that using the full text search returns zero results, so it didn’t work for this complex prompt. So let me try another traditional approach. I’ll move to pgAdmin to see what happens if I manually extract the keywords.
-If we expand the query a bit by adding an OR clause with the keywords for the landmark we’re interested in seeing, Space Needle, we get over 100 results, but we can see from the descriptions that they’re not useful. For example, a few listings are way too small for three bedrooms, like a micro studio on Capitol Hill with 175 square feet and a Queen Anne condo with 800 square feet.
-So let’s try one more thing to see if we can get closer without using vectors. If you’ve been using Postgres for a while, you’ll know that it has two custom data types, tsvector, which sounds a lot like vector search, and tsquery to support full text search to match specific words or phrases within text data.
-I’ve added a column for tsvector in our listings table to document words or numbers in our descriptions, and here’s what that looks like. As you’ll see in the text search column, it’s converted longer strings into individual word segments, and I’ve added both GIN and GiST indexes for tsquery, which provides support for a combination of Boolean operators.
-These two modifications might allow for a natural language search like ours. That said, the best I can do is modify my query to just Space Needle and three bedrooms, and the result is better than zero, but it doesn’t have everything I need because I had to remove the pets allowed and proximity elements from my search.
-The search phrase is just too complicated here. To make this work, we would need to create additional indexes, assign weights to different parts of the input text for relevance ranking, and probably modify the search phrase itself to limit the number of key items that we’re looking for. So it would no longer be natural language search, so traditional methods fall short.
-But the good news is we can make all of this work using semantic search, with vector embeddings generated locally within Azure Postgres thanks to the Azure local AI extension. Let me show you. First, we need to convert our text data from the descriptions, summaries, and listing names into vectors, which will create multiple numeric representations for each targeted field in the database. Here, I’m adding a vector column, local_description_vector, to specify that all embeddings generated for this column will use my locally deployed model.
-And this is what the output of one row looks like. As I scroll to the right, you’ll see these are all vector dimensions. Now, we also need to do the same for the incoming search prompts. Here, the number of embeddings generated per field or user prompt is defined by the number of dimensions the model requires to provide an accurate representation of that data. This is my query to convert the prompt into vectors, and this is an example of the generated vector embeddings for that prompt.
-And depending on the embedding model, these can generate hundreds or thousands of dimensions per entry to retrieve similarity matches. So our vectors are now ready. And next, we’ll add an index to speed up retrieval. I’ve added an HNSW vector index to build a multi-layered graph, and I’ve specified cosine similarity, which is best suited for text and document search.
-And now with everything running using vector search with our local AI extension, let’s test it out. I’ll use the same verbose query from before in our hotel booking site, properties in Seattle near the Space Needle with three bedrooms that allow pets, and you’ll see that there are a number of results returned that meet all of the criteria from the natural language text. I can see from each description that they are near or have a view of or are a short walk to the Space Needle. Each clearly has three or more bedrooms and all of them allow for pets with different conditions.
-Of course, the Azure local AI extension can still work in tandem with the broader Azure AI service via the Azure AI extension, which lets me hook into more advanced capabilities including language translation, vision, speech, and sentiment analysis, as well as generative AI chat with large language models in the Azure OpenAI service. In fact, let’s combine the power of both the Azure AI and Azure local AI extensions to perform realtime translations of listing summaries for the properties that meet our requirements.
-I’ve enabled the Azure AI extension, and in my SELECT statement, you’ll see that I’m using the Azure Cognitive Translate API for ES or Spanish for listing summaries. Now, I’ll go ahead and select all these lines and run it. I’ll adjust the view a little bit, and when I pull up the map view and open the geometry viewer, you’ll see that I have both the original English summary, and below that, the Spanish summary.
-And to see more of what’s possible with the Azure AI extension, check out our previous episode at aka.ms/PGAIMechanics. So that’s how you can improve search using vectors without sending data outside of your database to generate embeddings while dramatically reducing latency and still take advantage of the broader Azure AI service stack.
-To get started with Azure Database for PostgreSQL — Flexible Server, check out aka.ms/postgresql. And to stay current with all the updates, check out our blog at aka.ms/azurepostgresblog. Keep watching Microsoft Mechanics for the latest updates. Hit Subscribe and thank you for watching.
Microsoft Tech Community – Latest Blogs
FE Model with function handle
Hello everyone.
Is it possible to use a function handle within a femodel, so that I could change the value of a material property or a load? For example:
gm = multicuboid(0.5,0.1,0.1);
pdegplot(gm,FaceLabels="on",FaceAlpha=0.5);
% without function handles
model = femodel(AnalysisType="structuralStatic", ...
    Geometry=gm);
E = 210e3;
P = 1000;
nu = 0.3;
model.MaterialProperties = materialProperties(YoungsModulus=E,PoissonsRatio=nu);
model.FaceLoad(2) = faceLoad(Pressure=P);
model.FaceBC(5) = faceBC("Constraint","fixed");
model = generateMesh(model);
r = solve(model);
pdeplot3D(r.Mesh,"Deformation",r.Displacement,"ColorMap",r.VonMisesStress);
% using function handles
model = femodel(AnalysisType="structuralStatic", ...
    Geometry=gm);
E = @(x) x(1);
P = @(x) x(2);
model.MaterialProperties = materialProperties(YoungsModulus=E,PoissonsRatio=nu);
model.FaceLoad(2) = faceLoad(Pressure=P);
model.FaceBC(5) = faceBC("Constraint","fixed");
model = generateMesh(model);
values = [210e3 1000];
r = solve(model(values)); % I know it's wrong, since model is a single femodel object, but I want to replace the YoungsModulus property with values(1) and the face load with values(2)
pdeplot3D(r.Mesh,"Deformation",r.Displacement,"ColorMap",r.VonMisesStress);
STM32H7xx DMA interrupts not working on UART receive
Requirement:
Receive every byte of data over UART; I need to look for 'r' and then process the numbers received before it, using interrupt-driven code for better resource usage.
What I have tried:
Enable the UART in CubeMX and add a DMA request for the UART in circular mode.
Then, in Simulink, I added a Hardware Interrupt block and selected the DMA channel that I set up for the UART as the interrupt source. I am checking only the 'TC' (transfer complete) event as the interrupt source in the Hardware Interrupt block.
Issue:
The code compiles and runs without error, but the triggered subsystem (function call) connected to the Hardware Interrupt block never runs, since my value counter never increments in the subsystem. I think the DMA is configured, but it is not started properly by Simulink to generate interrupts.
I have tried using the Hardware Interrupt block with an external interrupt from a button push; in that case, my interrupt-driven counter increments. But when I switch the interrupt source to the DMA attached to the UART RX, no interrupt occurs.
Question:
Does anybody have an idea how I can generate interrupts from the DMA when it receives one word (4 bytes) from the UART, and use the Hardware Interrupt block to call my triggered subsystem to process those bytes?
Thanks.
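Separately from the DMA interrupt issue, the byte-level framing described in the requirement (collect digits until an 'r' terminator, then convert) can be prototyped on its own. A sketch of that logic in Python, as a stand-in for what the triggered subsystem would eventually do on the STM32:

```python
def extract_numbers(stream: bytes):
    """Scan a byte stream for 'r'-terminated number frames,
    e.g. b'123r456r' yields [123, 456].

    Mirrors the stated requirement: watch for 'r', then process the digits
    received before it. Illustration only; on the target this logic would
    run inside the DMA-triggered subsystem.
    """
    numbers, digits = [], b""
    for byte in stream:
        ch = bytes([byte])
        if ch == b"r":
            if digits:
                numbers.append(int(digits))
            digits = b""
        elif ch.isdigit():
            digits += ch
        else:
            digits = b""  # discard frames containing unexpected bytes
    return numbers
```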
Wrong motion of SCARA robot in dynamics
I have implemented a SCARA RRP robot in Simulink/MATLAB. The robot's movement is carried out correctly using kinematics, but when I use the output data from the kinematics as the input for the dynamics, the robot performs a rotational and unproductive movement. Should I perform any specific calculations on my trajectories before using them as the dynamics input?
Data input and target formatting for Deep Learning Models
I am trying to train an ML model with data from 10 different trials in batches. Right now the input data is stored in a 1×9 cell array (Features), with each cell containing a 3x1x541 dlarray corresponding to the 3 accelerometer channels (C), batch (B), and 541 time steps (T) for all 10 trials. The other cell array (Predictionvalue) contains the corresponding continuous variable we are trying to predict over the 541 time steps. When feeding this into my model, I get the error: Error using trainnet (line 46)
Dimension format of predictions and target values arguments must match.
Are there any suggestions on how I could fix this, or am I formatting my data inputs/targets incorrectly?
Thank you so much in advance!
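That trainnet error usually means the targets do not carry the same dimension format as the network output: if the predictions are "CBT" (channel x batch x time), the targets must be arranged and labeled the same way. The shape bookkeeping, sketched in Python/NumPy as a stand-in for dlarray formats (an illustration of the rule, not MATLAB API code):

```python
import numpy as np

# Stand-in for one trial: C=3 accelerometer channels, B=1 batch, T=541 steps.
C, B, T = 3, 1, 541
features = np.random.randn(C, B, T)   # network input, format "CBT"

# Target: one continuous value per time step. Stored as a flat (T,) vector
# it does NOT match the prediction layout...
target_flat = np.random.randn(T)

# ...so rearrange it to 1 channel x 1 batch x T steps, mirroring "CBT".
target_cbt = target_flat.reshape(1, B, T)

# The rule trainnet enforces: identical dimension ordering on both sides.
prediction = np.zeros((1, B, T))      # e.g. a 1-channel regression head
```

In MATLAB terms, the fix is typically to wrap the targets as `dlarray(target, "CBT")` (or supply matching-format cells) so their format string equals that of the network output.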
Update to Microsoft Desktop Virtualization API v. 2023-09-05 by August 2, 2024 to avoid any impact
Older Microsoft Desktop Virtualization API version(s) utilized for your Azure Virtual Desktop host pool resource will no longer support ‘get’ actions for registration token retrieval as of August 2nd, 2024.
The affected API versions are as follows:
2019-01-23-preview
2019-09-24-preview
2019-12-10-preview
2020-09-21-preview
2020-11-02-preview
2020-11-10-preview
2021-01-14-preview
On August 2nd, 2024, these affected API versions will no longer support the retrieval of the registration token. Users on older versions will not be able to use the ‘get’ action to retrieve the token. However, with the newer versions, a new ‘post’ action can be used to securely retrieve the token:
AZ CLI: az desktopvirtualization hostpool retrieve-registration-token – az desktopvirtualization hostpool | Microsoft Learn
REST: Host Pools – Retrieve Registration Token – REST API (Azure Desktop Virtualization) | Microsoft Learn
AZ PowerShell: Get-AzWvdHostPoolRegistrationToken (Az.DesktopVirtualization) | Microsoft Learn
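For REST callers, the documented action above is a POST to the host pool's retrieveRegistrationToken endpoint. A small Python sketch that only builds the request URL (the subscription, resource group, and pool names are placeholders; actually sending it requires an ARM bearer token):

```python
# Build (but do not send) the POST URL for the documented
# "Host Pools - Retrieve Registration Token" REST action.
# All identifiers below are placeholders.
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
HOST_POOL = "my-host-pool"

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.DesktopVirtualization"
    f"/hostPools/{HOST_POOL}"
    "/retrieveRegistrationToken"
    "?api-version=2023-09-05"
)
```

Note the api-version query parameter: it is the 2023-09-05 version that supports the POST action, while the preview versions listed above lose token retrieval entirely.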
Action Required
Review any workflows you may have that rely on readers retrieving access tokens and update them to extract the registration tokens for a host pool in a new way.
Ensure you are using up to date versions of the Microsoft Desktop Virtualization API.
To take action, here are examples of how to extract the registration tokens for a host pool and update to the 2023-09-05 API version using Bicep and ARM templates.
If you are using Bicep templates in your deployment:
retrieveToken.bicep – module used to retrieve the registration token from a host pool by using a patch operation:
@sys.description('Optional. Host Pool token validity length. Usage: ''PT8H'' - valid for 8 hours; ''P5D'' - valid for 5 days; ''P1Y'' - valid for 1 year. When not provided, the token will be valid for 8 hours.')
param tokenValidityLength string = 'PT8H'
@sys.description('Generated. Do not provide a value! This date value is used to generate a registration token.')
param baseTime string = utcNow('u')
param vLocation string
param vHostPoolName string
param vHostPoolType string
param vPreferredAppGroupType string
param vMaxSessionLimit int
param vLoadBalancerType string
resource hostPool 'Microsoft.DesktopVirtualization/hostPools@2023-09-05' = {
  name: vHostPoolName
  location: vLocation
  properties: {
    hostPoolType: vHostPoolType
    preferredAppGroupType: vPreferredAppGroupType
    maxSessionLimit: vMaxSessionLimit
    loadBalancerType: vLoadBalancerType
    registrationInfo: {
      expirationTime: dateTimeAdd(baseTime, tokenValidityLength)
      registrationTokenOperation: 'Update'
    }
  }
}
@sys.description('The registration token of the host pool.')
output registrationToken string = reference(hostPool.id).registrationInfo.token
sample.bicep – example of usage of retrieveToken.bicep module to extract the registration token:
@sys.description('AVD Host Pool resource ID. (Default: )')
param hostPoolResourceId string
var varHostpoolSubId = split(hostPoolResourceId, '/')[2]
var varHostpoolRgName = split(hostPoolResourceId, '/')[4]
var varHostPoolName = split(hostPoolResourceId, '/')[8]
// Look up the existing host pool
resource hostPoolGet 'Microsoft.DesktopVirtualization/hostPools@2023-09-05' existing = {
  name: varHostPoolName
  scope: resourceGroup(varHostpoolSubId, varHostpoolRgName)
}
module hostPool 'retrieveToken.bicep' = {
  name: varHostPoolName
  scope: resourceGroup(varHostpoolSubId, varHostpoolRgName)
  params: {
    vHostPoolName: varHostPoolName
    vMaxSessionLimit: hostPoolGet.properties.maxSessionLimit
    vPreferredAppGroupType: hostPoolGet.properties.preferredAppGroupType
    vHostPoolType: hostPoolGet.properties.hostPoolType
    vLoadBalancerType: hostPoolGet.properties.loadBalancerType
    vLocation: hostPoolGet.location
  }
}
@sys.description('The registration token of the host pool.')
output registrationToken string = hostPool.outputs.registrationToken
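The tokenValidityLength parameter in retrieveToken.bicep is an ISO 8601 duration consumed by dateTimeAdd. As a sanity check on what 'PT8H', 'P5D', and 'P1Y' mean, a simplified Python sketch (it handles only these single-unit forms and approximates a year as 365 days; it is not a full ISO 8601 parser):

```python
from datetime import datetime, timedelta

def add_iso_duration(base: datetime, duration: str) -> datetime:
    """Add a single-unit ISO 8601 duration like 'PT8H', 'P5D', or 'P1Y'.

    Simplified illustration: real ISO 8601 durations can combine units;
    here a year is approximated as 365 days.
    """
    if duration.startswith("PT") and duration.endswith("H"):
        return base + timedelta(hours=int(duration[2:-1]))
    if duration.startswith("P") and duration.endswith("D"):
        return base + timedelta(days=int(duration[1:-1]))
    if duration.startswith("P") and duration.endswith("Y"):
        return base + timedelta(days=365 * int(duration[1:-1]))
    raise ValueError(f"unsupported duration: {duration}")
```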
If you are using ARM templates in your deployment:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "0.28.1.47646",
      "templateHash": "15215789985349638425"
    }
  },
  "parameters": {
    "hostPoolName": {
      "type": "string"
    },
    "location": {
      "type": "string"
    },
    "baseTime": {
      "type": "string",
      "defaultValue": "[utcNow('u')]"
    }
  },
  "variables": {
    "expirationTime": "[dateTimeAdd(parameters('baseTime'), 'PT1H1M')]"
  },
  "resources": [
    {
      "type": "Microsoft.DesktopVirtualization/hostPools",
      "apiVersion": "2023-09-05",
      "name": "[parameters('hostPoolName')]",
      "location": "[parameters('location')]",
      "properties": {
        "maxSessionLimit": 2,
        "hostPoolType": "Personal",
        "loadBalancerType": "Persistent",
        "preferredAppGroupType": "Desktop",
        "registrationInfo": {
          "expirationTime": "[variables('expirationTime')]",
          "registrationTokenOperation": "Update"
        }
      }
    }
  ],
  "outputs": {
    "token": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.DesktopVirtualization/hostPools', parameters('hostPoolName'))).registrationInfo.token]"
    }
  }
}
Additional Support
If you have any questions, comments, or concerns about this, please feel free to post a comment.
sample.bicep – example of usage of retrieveToken.bicep module to extract the registration token:
@sys.description(‘AVD Host Pool resource ID. (Default: )’)
param hostPoolResourceId string
var varHostpoolSubId = split(hostPoolResourceId, ‘/’)[2]
var varHostpoolRgName = split(hostPoolResourceId, ‘/’)[4]
var varHostPoolName = split(hostPoolResourceId, ‘/’)[8]
// Call on the hotspool
resource hostPoolGet ‘Microsoft.DesktopVirtualization/hostPools@2023-09-05’ existing = {
name: varHostPoolName
scope: resourceGroup(‘${varHostpoolSubId}’, ‘${varHostpoolRgName}’)
}
module hostPool ‘retrieveToken.bicep’ = {
name: varHostPoolName
scope: resourceGroup(‘${varHostpoolSubId}’, ‘${varHostpoolRgName}’)
params: {
vHostPoolName: varHostPoolName
vMaxSessionLimit: hostPoolGet.properties.maxSessionLimit
vPreferredAppGroupType: hostPoolGet.properties.preferredAppGroupType
vHostPoolType: hostPoolGet.properties.hostPoolType
vLoadBalancerType: hostPoolGet.properties.loadBalancerType
vLocation: hostPoolGet.location
}
}
@sys.description(‘The registration token of the host pool.’)
output registrationToken string = hostPool.outputs.registrationToken
If you are using ARM templates in your deployment:
{
“$schema”: “https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#”,
“contentVersion”: “1.0.0.0”,
“metadata”: {
“_generator”: {
“name”: “bicep”,
“version”: “0.28.1.47646”,
“templateHash”: “15215789985349638425”
}
},
“parameters”: {
“hostPoolName”: {
“type”: “string”
},
“location”: {
“type”: “string”
},
“baseTime”: {
“type”: “string”,
“defaultValue”: “[utcNow(‘u’)]”
}
},
“variables”: {
“expirationTime”: “[dateTimeAdd(parameters(‘baseTime’), ‘PT1H1M’)]”
},
“resources”: [
{
“type”: “Microsoft.DesktopVirtualization/hostPools”,
“apiVersion”: “2023-09-05”,
“name”: “[parameters(‘hostPoolName’)]”,
“location”: “[parameters(‘location’)]”,
“properties”: {
“maxSessionLimit”: 2,
“hostPoolType”: “Personal”,
“loadBalancerType”: “Persistent”,
“preferredAppGroupType”: “Desktop”,
“registrationInfo”: {
“expirationTime”: “[variables(‘expirationTime’)]”,
“registrationTokenOperation”: “Update”
}
}
}
],
“outputs”: {
“token”: {
“type”: “string”,
“value”: “[reference(resourceId(‘Microsoft.DesktopVirtualization/hostPools’, parameters(‘hostPoolName’))).registrationInfo.token]”
}
}
}
Additional Support
If you have any questions, comments, or concerns about this, please feel free to post a comment. Read More