Month: July 2024
How to run a Simulink simulation from a MATLAB script
Hello, I need to run a Simulink simulation from MATLAB. How do I do it using a MATLAB command in a script? simulink, run function MATLAB Answers — New Questions
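For reference, a minimal sketch of the usual approach (the model name here is hypothetical): the sim function runs a Simulink model from a script and returns its logged outputs.

```matlab
% Run a Simulink model programmatically (model name "myModel" is hypothetical)
mdl = "myModel";
load_system(mdl);                        % load the model without opening the editor
simOut = sim(mdl, "StopTime", "10");     % run for 10 seconds of simulation time
% Logged signals are available on the returned SimulationOutput object, e.g.:
% y = simOut.yout;
```

Model parameters can also be configured through a Simulink.SimulationInput object, which is the pattern used in the parsim question later in this digest.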
What to do? I'm getting QuickBooks error QBWC1039 after an update
The error appeared every time I attempted to connect a third-party application to QuickBooks using the QuickBooks Web Connector (QBWC). The exact message read: "Error QBWC1039: There was a problem adding the application. Check QWCLog.txt for details."
Need help with Sharing files, folders or items
Hello
Please, I need your help with this issue.
How do we share a file on OneDrive with an external user who uses a Gmail account?
We followed these steps, but the user still cannot open the files.
Here are the steps:
1. Go to OneDrive and sign in with your Microsoft account.
2. Navigate to the file or folder you want to share and select it.
3. Click the "Share" button.
4. In the "Send link" dialog box, click the down arrow to change the type of link.
5. Choose "Anyone with the link" if you want the link to be accessible to anyone who has it.
But when the Gmail user tries to open the files, it says they need to sign in to their Microsoft account.
However, when we use Dropbox, the user is able to open the files.
Automation of load testing (JMeter script) using Azure DevOps release Pipeline (Continuous testing)
Content
Objective
Create Repos and add JMeter script
Create a service connection
Create a key vault
Create a Variable Group in Azure DevOps
Steps to Setup Release Pipeline
Objective
We use JMeter to test how an application handles concurrent users, and we want to run the JMeter script automatically at regular intervals without human involvement. Using Azure Pipelines for continuous testing reduces the risk of errors that can occur when a person performs a process repetitively, and it helps us detect issues sooner by running tests automatically.
Diagram
Scenario
Credentials should be stored in Azure Key Vault and retrieved from there each time the load test runs. In this case, there is an access token for the API calls, which must be regenerated every 24 hours.
Create Repos and add JMeter script
Upload a JMeter script that has already worked in GUI mode. The curl command that generates the access token is embedded in the script. The client ID and client secret must not be hard-coded in the script; they are substituted with variable names. Inside the script, the portion where the client ID/secret is assigned is replaced by a function that takes the variable name (the substitute for the client ID/secret) as its parameter.
Create Service Connection
To connect Azure Pipelines to Azure Key Vault (in the Azure portal), we create a service connection and specify a service principal. A service connection is a configuration that securely stores the information required to connect and authenticate to external services or resources. The service principal determines the resources and access levels available over the connection.
Create a key vault and Grant Azure DevOps Access to Key Vault
In the Azure portal, create a key vault by providing a name, subscription, resource group, and location for the vault. To access data from the vault, grant the service principal above the permissions needed for authentication in the pipeline. Once the vault is provisioned, select it and add the new secret.
Create a Variable Group in Azure DevOps
To integrate an Azure Key Vault with your Azure DevOps pipeline through the service connection, and create a variable group for its secrets, follow these steps:
In Azure DevOps, go to ‘Pipelines’ > ‘Library’.
Click ‘+ Variable Group’. Name your variable group.
Link to an Azure Key Vault and select your subscription and the Key Vault.
Select ‘Authorize’ to connect Azure DevOps to your Key Vault.
Choose the secrets from your Key Vault to add to the variable group.
Steps to Setup Release Pipeline
To create the release pipeline, go to the Releases page in the left menu of Azure DevOps, then click the New pipeline button.
Select "Azure Repos Git" as the source, fill in the team project, repository, and branch details as required, and click Continue. Then select "Empty Job" as the template, and specify the pipeline name and agent pool.
Add an artifact and connect the Azure repository (Empty Job).
When the release creation page opens, Azure automatically adds a stage. Add the following tasks to the stage:
Add JMeter – installs JMeter.
Add Command Line. (The command line should contain the instructions to run the selected JMeter file, link the client ID/secret secured in Azure Key Vault to the script, and store the load-test results in a downloadable location.)
Add Upload Release Artifact.
Save the pipeline and configure its schedule; scheduling can be done using the schedule option near the artifact.
Create a pipeline release and execute it.
Download the logs and check the report.
JMeter script (in the figure above, this refers to the place marked "2")
Bearer ${AuthAccessToken}
client_id=${__P(variable1,)}
client_secret=${__P(variable2,)}
Azure DevOps release pipeline -> Stages -> Tasks -> Command Line script (in the figure above, this refers to the place marked "4")
jmeter -Jvariable1=$(ABCD_ClientId) -Jvariable2=$(ABCD_ClientSecret) -n -t $(System.DefaultWorkingDirectory)/path_to_dir/loadTestscript.jmx -l $(System.DefaultWorkingDirectory)/path_to_dir/LoadTestResults.jtl -e -o $(System.DefaultWorkingDirectory)/path_to_dir/LoadTestReports
Problem:
Ours is a special scenario where the secrets must be stored in Azure Key Vault instead of ADO variables.
While there are online solutions for automating a JMeter load test in Azure DevOps, a solution for this specific scenario is not available online. Online sources suggest using Apache Groovy functions inside the JMeter script for encryption when ADO variables are used for secret storage. This did not work in our Windows environment, and in any case we need the credentials stored in Azure Key Vault.
Why do we insist on Azure Key Vault? It offers better security and easier access management, while ADO variable groups offer reusability of variables and easy maintenance.
Solution:
The solution combines the way the variables (which hold the secrets) are referenced in two places:
inside JMeter script
Azure DevOps release Pipeline-> Stages -> Tasks -> CommandLine Script
Understanding Row Groups
I am very new to SSRS and have inherited some reports. They have existing row groups which give the output shown. Where do I find the definitions of the current groups (System and Sample Date/Time)? I've looked at the Details_Group and see nothing defined. I want to group by Unit first, then System, then Sample Date/Time.
An update will not install
Hello, good afternoon. I am having trouble with the update Cumulative Update for Windows 11 Insider Preview (10.0.26120.1252) (KB5038603).
It stays on loading and neither downloads nor installs.
QuickBooks File Doctor Won't Open – Need Help!
I've been trying to open QuickBooks File Doctor to fix some company file issues, but every time I try to launch the application, it won't open. Has anyone else encountered this issue with QuickBooks File Doctor not opening? If so, what steps did you take to resolve it? Any advice or suggestions would be greatly appreciated.
How do I use objects with inheritance structures in a `parfor` loop in MATLAB?
I am trying to parallelize a simulation that uses a custom library. Within the parfor loop I am using, I receive the following error:
Error using sensor/readdata
Unrecognized field name "sensorDevice".
The object is an instance of Sensor, which inherits its .sensorDevice property from its parent class. The readdata method uses the inherited property. This works fine in a for loop. The transparency requirements for variable definitions, and the corresponding documentation, emphasize that all variables must be machine-readable, but I do not follow why the parfor loop cannot trace the pointers to the parent class's fields and methods. Rewriting the class in a flattened structure is not an option due to the time that would be required.
How do I use methods and fields from a parent class in an instance of a child class in a parfor loop?
parfor, parallel computing MATLAB Answers — New Questions
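A minimal sketch of the setup described, with hypothetical class and property names (each classdef would live in its own file):

```matlab
% Device.m -- parent class holding the property (hypothetical names)
classdef Device
    properties
        sensorDevice = "dev0";
    end
end

% Sensor.m -- child class whose method uses the inherited property
classdef Sensor < Device
    methods
        function d = readdata(obj)
            d = obj.sensorDevice;   % works in a for loop; fails in parfor per the question
        end
    end
end

% script: the loop pattern that triggers the error
s = Sensor;
out = cell(1, 4);
parfor i = 1:4
    out{i} = readdata(s);   % the object is serialized and sent to each worker
end
```

One thing worth checking in this situation: the workers must be able to see both class definition files on their path, since the object is deserialized on each worker from those definitions.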
Run Simulink Model via parsim() and save after each simulation
I am trying to run my simulink model in parallel while saving the results to a matfile after each simulation has completed.
Each worker should take the following actions:
Simulate the model with simIn(j).
Create a matfile for that simulation.
Repeat with the next simIn entry until all simulations are complete.
The code below is my starting point. "test_model" is a Simulink model of only a sine wave going into a Gain, an Integrator, and a To Workspace block, simply to test out parsim().
%% Example of how to run Simulink models in parallel
clear; close all; clc;
mdl = "test_model";
amp = 1:10;
freq = 1:10;
nSims = length(amp);
simIn(1:nSims) = Simulink.SimulationInput(mdl);
% Set up model workspace
k = 5;
for i = 1:nSims
    nm = [num2str(i) '.mat'];
    simIn(i) = setVariable(simIn(i), "amp", amp(i), 'Workspace', mdl);
    simIn(i) = setVariable(simIn(i), "freq", freq(i), 'Workspace', mdl);
    simIn(i) = setPostSimFcn(simIn(i), @(x) postSim(x, simIn(i), nm));
end
out = parsim(simIn, 'UseFastRestart', 'on', 'TransferBaseWorkspaceVariables', 'on');

function postSim(out, in, nm)
% postSim - run at completion of each simulation.
% Saves simulation input, output, and name to file "nm".
m = matfile(nm, 'Writable', true);
m.out = out;
m.in = in;
m.nm = nm;
end
I understand that save() does not work as expected in parallel, so I've implemented the example from the question below as my method of creating the .mat files. The code above works exactly as expected if parsim is replaced with sim; however, no files are created when using parsim.
https://www.mathworks.com/matlabcentral/answers/135285-how-do-i-use-save-with-a-parfor-loop-using-parallel-computing-toolbox
Note: setting 'TransferBaseWorkspaceVariables' to 'off' and setting k via setVariable does not have any impact on this behavior.
Ask: How can I modify the above code to also work with the parsim command?
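One thing worth checking here (a sketch under an assumption, not a confirmed fix): with parsim the post-sim function runs on the workers, whose current folder can differ from the client's, so a relative file name like '1.mat' may be written somewhere other than where you are looking. Building an absolute path to a folder visible to both client and workers makes the destination explicit (the folder name below is hypothetical):

```matlab
function postSim(out, in, nm)
% postSim - runs on the worker after each simulation completes.
% Write to an absolute path so the destination does not depend on the
% worker's current folder ("outDir" is a hypothetical shared location).
outDir = "C:\shared\results";        % must be reachable from the workers
f = fullfile(outDir, nm);
m = matfile(f, 'Writable', true);
m.out = out;
m.in  = in;
m.nm  = nm;
end
```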
simulink, parallel computing toolbox MATLAB Answers — New Questions
Controlled Voltage Source (Three-Phase)
After the three-phase voltage signal passes through the Controlled Voltage Source (Three-Phase) block, the amplitude becomes √3 times the original value, and the phase also changes. But according to the block description:
Instantaneous — The output voltages, [va vb vc], are equal to the values of the input port S.
The signal input and output should therefore be the same.
Could anyone help me with this question? Many thanks.
controlled voltage source (three-phase) MATLAB Answers — New Questions
A struct in the workspace window contains many (say 1000 or more) field names. How can I quickly look for a field name without any code?
I have a struct that contains 2000 different names of places with their average temperatures. When I open the struct variable, I get a huge list. How can I quickly locate a field and read its values without scrolling through every entry? I don't see any find option in the window. One might say I can code it; yes, I can, but I am looking for a quick alternative so that I don't need to write code to find it. quick variable search in struct variable window MATLAB Answers — New Questions
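Short of a built-in find box, a one-off command in the Command Window can filter the field names (the data below is a hypothetical stand-in for the 2000-field struct):

```matlab
% Quickly locate fields of a large struct by name
T = struct('London', 11.2, 'Lisbon', 17.5, 'Lima', 19.1);  % stand-in data
names = fieldnames(T);                     % all field names as a cell array
hits  = names(contains(names, 'Li'));      % fields whose name contains 'Li'
for k = 1:numel(hits)
    fprintf('%s = %g\n', hits{k}, T.(hits{k}));  % print each match and its value
end
```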
Custom fields in Email confirmation
Hello,
I would like to include one of the custom fields in the confirmation email. At this point I don't think it is possible in Bookings, unless I am missing something obvious.
Thanks!
Excel not sorting numbers correctly
I have what I hope is a simple problem: all I need is for Excel to sort numbers numerically. Instead, it is sorting them 1, 10, 11 … 19, 2, 20, etc.
I’ve tried pasting my raw data as values, I’ve tried converting to number, I’ve tried =CLEAN, =TRIM, =SUBSTITUTE, I’ve tried pasting just the numbers as values, none of it is working. When I do =ISNUMBER, it keeps coming back false no matter what I try.
My function for Stop (clean) is =CLEAN(TRIM(SUBSTITUTE(C2,CHAR(160),"")))
Inactivating some of the builtin Microsoft Sensitive Information Types / SITs?
In Purview’s CONTENT EXPLORER we see all 300+ built-in Microsoft SITs being discovered – about 2/3 of which aren’t relevant to my org (i.e. identification numbers, social welfare numbers, passport numbers, revenue numbers, etc. from other countries – PNG attached with a few of them highlighted).
Is there any way to inactivate or NOT search for/identify these irrelevant SITs?
Shared / Shift Mobile Phone
Hi All
I hope you are well.
Anyway, is it possible to deploy a config that allows for a Mobile Phone to be shared amongst shift users?
Does Entra Shared Device Mode cover this scenario?
Info appreciated.
Stuart
Teams calendar help
Hello
Please, I need your help with this issue.
I need to add an event to a Teams calendar without sending notifications to all members.
Generally Available: Transition to WS2012 / R2 ESUs enabled by Azure Arc from Volume Licensing
Customers that have enrolled in WS2012/R2 ESUs through Volume Licensing for Year 1 can transition to Azure Arc for Year 2 of the program. Extended Security Updates provide customers with critical security patches for end-of-support Windows Server 2012/R2 machines. ESUs are available at no additional cost to customers running on Azure VMs or Azure Stack HCI. For customers running on-premises or in other public clouds, WS2012/R2 ESUs are a paid offer.
WS2012/R2 ESUs enabled by Azure Arc afford key advantages compared to ESUs through Volume Licensing, including:
Pay-as-you-go flexibility to scale down WS2012 ESU consumption as customers migrate workloads to Azure and modernize.
The ability to apply Azure discounting through the decrement of a Microsoft Azure Consumption Commitment (MACC), affording significant financial benefits.
Azure management capabilities including Azure Update Manager, Azure Change Tracking and Inventory, and Azure Machine Configuration are available at no additional cost.
For enrollment, customers must specify their Year 1 Volume Licensing entitlement, indicating their Invoice Id (invoice number) to reflect their MAK key entitlement at the time of Azure Arc WS2012/R2 ESU license provisioning. This is available directly in the Azure portal with the current experience for Create an Extended Security Updates license. Programmatically, customers can use the Azure CLI to generate new licenses, specifying their Year 1 Volume Licensing entitlements in the new Volume License Details parameter (see az connectedmachine license | Microsoft Learn) by entering the respective invoice numbers. Customers must explicitly specify the Invoice Id (number) in their license provisioning for Azure Arc.
Customers that make this indication in their license creation will not be back billed for Year 1 of Extended Security Updates, with billing to commence from the start of Year 2 of Windows Server 2012/R2 Extended Security Updates. Customers do not need to deactivate existing MAK Keys or unenroll from ESUs through Volume Licensing to enroll in WS2012/R2 ESUs enabled by Azure Arc. See enrollment steps at Deliver Extended Security Updates for Windows Server 2012 – Azure Arc | Microsoft Learn. After reviewing Azure Arc and licensing terms, connect your servers to Azure Arc, provision new WS2012/R2 ESU licenses in Azure portal specifying Volume Licensing entitlements, and link your servers to these licenses for enrollment. With just three months until the end of Year 1, the time is now to transition to WS2012/R2 Extended Security Updates enabled by Azure Arc.
Microsoft Tech Community – Latest Blogs
While writing a script for an automatic Model Advisor check for a model, I am getting this error
Post Content automate model advisor, model advisor MATLAB Answers — New Questions
I need to create a polygon or buffer along an irregular shaped coastline in a 2D array of gridded sea temperature data.
I have a 578×235 grid of ocean temperature data in which the land pixels are NaN. I would like to create a 1-pixel buffer around the coastline so that all pixels adjacent to an existing NaN also become NaN. I'm unsure of the best way to do this. I thought I could create a logical array of NaNs and non-NaNs, but I am not sure of the best way to extract a polygon and create the buffer after that step. extract polygon, buffer MATLAB Answers — New Questions
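One way to sketch this without extracting a polygon at all (assuming a plain numeric grid): treat the NaNs as a logical land mask and grow it by one pixel with a 3-by-3 neighborhood sum.

```matlab
% Grow NaN (land) regions by one pixel in a gridded temperature field
T = rand(578, 235);             % stand-in for the sea-temperature grid
T(200:300, 100:120) = NaN;      % hypothetical land block
landMask = isnan(T);                                    % 1 where land/NaN
grown = conv2(double(landMask), ones(3), 'same') > 0;   % true where any 3x3 neighbor is NaN
T(grown) = NaN;                 % pixels adjacent to land become NaN too
```

With the Image Processing Toolbox, imdilate(landMask, ones(3)) produces the same grown mask.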
Training Data type error for a CNN using trainnet function
Trying to use a convolution1dLayer for my sequence input data, but when I try to train it I get the error:
"Error using trainnet
Invalid targets. Network expects numeric or categorical targets, but received a cell array."
I’ve looked at many examples of how the data must be structured, but even when mine is in the same format, it doesn’t work.
For the predictors I’m doing a test with only 4 observations, each one with 4 features and 36191 points.
For the targets there are also four observations, with only one target each and also 36191 points.
I can’t understand why it doesn’t accept it; like I said, it’s equal to many other examples. I leave down here the code for the CNN-LSTM network and the trainnet function:
lgraph = layerGraph();
tempLayers = [
    sequenceInputLayer(4,"Name","input")
    convolution1dLayer(4,32,"Name","conv1d","Padding","same")
    globalAveragePooling1dLayer("Name","gapool1d")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = lstmLayer(25,"Name","lstm");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = lstmLayer(25,"Name","lstm_1");
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
    concatenationLayer(1,2,"Name","concat")
    lstmLayer(55,"Name","lstm_2")
    dropoutLayer(0.5,"Name","drop")
    fullyConnectedLayer(1,"Name","fc")
    sigmoidLayer("Name","sigmoid")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
lgraph = connectLayers(lgraph,"gapool1d","lstm");
lgraph = connectLayers(lgraph,"gapool1d","lstm_1");
lgraph = connectLayers(lgraph,"lstm","concat/in1");
lgraph = connectLayers(lgraph,"lstm_1","concat/in2");
plot(lgraph);
epochs = 800;
miniBatchSize = 128;
LRDropPeriod = 200;
InitialLR = 0.01;
LRDropFactor = 0.1;
valFrequency = 30;
options = trainingOptions("adam", ...
    MaxEpochs=epochs, ...
    SequencePaddingDirection="left", ...
    Shuffle="every-epoch", ...
    GradientThreshold=1, ...
    InitialLearnRate=InitialLR, ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropPeriod=LRDropPeriod, ...
    LearnRateDropFactor=LRDropFactor, ...
    MiniBatchSize=miniBatchSize, ...
    Plots="training-progress", ...
    Metrics="rmse", ...
    Verbose=0, ...
    ExecutionEnvironment="parallel");
CNN_LTSM = trainnet(trainDataX, trainDataY, dlnetwork(lgraph),"mse",options);
using version 2023b
deep learning, cnn, data, neural network, machine learning MATLAB Answers — New Questions
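The error message says trainnet will not accept a cell array for the targets even when the predictors are cell arrays of sequences. One common workaround is to wrap both sides in datastores so that trainnet receives paired observations; a hedged sketch, reusing the question's `trainDataX`, `trainDataY`, `lgraph`, and `options`:

```matlab
% Wrap the cell arrays of sequences in datastores.
% "OutputType","same" keeps each cell's contents as one observation.
dsX = arrayDatastore(trainDataX, "OutputType", "same");
dsY = arrayDatastore(trainDataY, "OutputType", "same");

% Combine predictors and targets into a single training datastore
dsTrain = combine(dsX, dsY);

% Train against the combined datastore instead of the raw cell arrays
net = trainnet(dsTrain, dlnetwork(lgraph), "mse", options);
```

If the targets are one value per observation rather than per-timestep sequences, another fix is simply converting `trainDataY` to a numeric column vector (or an N-by-1 matrix) before calling trainnet, since that matches the "numeric or categorical targets" the error asks for.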