Tag Archives: matlab
C-RNN dual output regression
Hi. I am writing C-RNN regression training code with a single matrix input and dual scalar outputs. The loaded "paddedData2.mat" file contains a variable paddedData, stored as an N x 3 cell array, as shown in the attached image. The input matrices used for training are in the 3rd column of paddedData, each a [440 5] double, and the regression targets are the values in the 1st column. With this, I plan to create features of size [436 1] using two [3 3] convolution kernels and train them using an LSTM. The code is below, but it fails with the error "trainnet (line 46): Error forming mini-batch of targets for network output "fc_1". Data interpreted with format "BC". To specify a different format use the TargetDataFormats option."
How can I modify the code?
clc;
clear all;
load("paddedData2.mat","-mat")
XTrain = paddedData(:,3);
YTrain1 = cell2mat(paddedData(:,1));
YTrain2 = cell2mat(paddedData(:,2));
dsX  = arrayDatastore(XTrain, 'OutputType', 'same');
dsY1 = arrayDatastore(YTrain1, 'OutputType', 'same');
dsY2 = arrayDatastore(YTrain2, 'OutputType', 'same');
net = dlnetwork;
tempNet = [
sequenceInputLayer([440 5 1],"Name","sequenceinput")
convolution2dLayer([3 3],8,"Name","conv_A1")
batchNormalizationLayer("Name","batchnorm_A1")
reluLayer("Name","relu_A1")
convolution2dLayer([3 3],8,"Name","conv_2")
batchNormalizationLayer("Name","batchnorm_2")
reluLayer("Name","relu_2")
flattenLayer("Name","flatten")
fullyConnectedLayer(100,"Name","fc")
lstmLayer(100,"Name","lstm","OutputMode","last")];
net = addLayers(net,tempNet);
tempNet = fullyConnectedLayer(1,"Name","fc_1");
net = addLayers(net,tempNet);
tempNet = fullyConnectedLayer(1,"Name","fc_2");
net = addLayers(net,tempNet);
clear tempNet;
net = connectLayers(net,"lstm","fc_1");
net = connectLayers(net,"lstm","fc_2");
net = initialize(net);
options = trainingOptions('adam', ...
    'MaxEpochs', 2000, ...
    'MiniBatchSize', 100, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');
lossFcn = @(Y1,Y2,dsY1,dsY2) crossentropy(Y1,dsY1) + 0.1*mse(Y2,dsY2);
net = trainnet(dsX, net, lossFcn, options);
deep learning, regression, multiple output MATLAB Answers — New Questions
Import to digsilent a dll generated starting from embedded coder in Simulink
Good morning
I read that MathWorks has developed a solution specifically for PowerFactory regarding DLL import.
Do you have some guidelines?
Could you help me?
Thank you for your time.
Regards,
Andrea
dll, digsilent, embedded coder, software interface MATLAB Answers — New Questions
Can I call a Simulink generated DLL file in a Simulink model (Matlab 2018b)?
I have created a .dll file (see fig: PID_win64.dll) and the associated headers (see fig: in PID_ert_shrlib_rtw) with the aid of Simulink (see fig: PID.slx). I now want to call it in a Simulink model (see fig: test_dll.slx) where I will test it. I have read in older posts that I have to use an S-Function block. Please let me know if this is the proper route to follow and, if so, could you share the exact steps (where should I put the name of the DLL and the headers, and which headers)?
The final aim is to import the created .dll file into DIgSILENT PowerFactory. If anyone can share any further information regarding this, it would be highly appreciated.
dll, s-function, 2018b, simulink, digsilent, powerfactory, import, call MATLAB Answers — New Questions
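Before wiring the DLL into an S-Function, one quick way to smoke-test it is from the MATLAB command line with loadlibrary/calllib. The sketch below assumes the ert_shrlib target's usual entry-point naming for a model called "PID" (PID_initialize, PID_step, PID_terminate) and that the generated header in PID_ert_shrlib_rtw is PID.h; the actual names in the generated header should be checked first.

```matlab
% Smoke-test the generated shared library from MATLAB (paths/names assumed).
loadlibrary('PID_win64.dll', 'PID.h');  % header from the PID_ert_shrlib_rtw folder
calllib('PID_win64', 'PID_initialize'); % one-time model initialization
calllib('PID_win64', 'PID_step');       % execute one model step
calllib('PID_win64', 'PID_terminate');  % clean up the model
unloadlibrary('PID_win64');
```

If the exported functions are visible here, the same entry points are what an S-Function wrapper (or PowerFactory's DSL/C interface) would call.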
Generate 3D model from a 2D image
Hi friends,
I would like to generate a 3D model from a 2D image but I don’t have any clue.
From some good instructions, I have successfully generated a binary image with my defined masks. The reason I want a binary image is that the 3D printer only accepts binary slices. I would like to extrude the pixels into a 3D model (all pixels equal to 1 are given the same height; pixels equal to 0 get no height, or a negligible one) and slice the result with my printer software. The first section below generates a good binary image. I know there are other ways to reconstruct a 3D model from a 2D image, but I want to do it in MATLAB.
3d plots, grayscale, binary image, digital image processing MATLAB Answers — New Questions
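The extrusion described above reduces to building a height map from the mask: multiply the logical image by a constant height. A minimal sketch, assuming the mask is available as a logical array BW (the file name and height value here are placeholders):

```matlab
% Extrude a binary mask into a 3-D height surface.
BW = imread('mask.png') > 0;   % assumption: the generated mask saved as an image
h  = 5;                        % extrusion height for 1-pixels (model units)
Z  = double(BW) * h;           % 0-pixels stay at height 0

surf(Z, 'EdgeColor', 'none');  % quick visual check of the extruded block
axis equal
```

To hand the result to a slicer, the height map still needs to be meshed and exported (e.g. to STL); surf2stl on the File Exchange is one commonly used option, though it is not a built-in function.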
Error applying Differential Evolution
When applying Differential Evolution, this error appeared and I was unable to resolve it. Does anyone know how to solve it, please?
matlab, differential evolution MATLAB Answers — New Questions
Modelling anisotropic materials in PDE Toolbox
Hi, I'm using the PDE Toolbox (unified workflow) to model electromagnetics (DC conduction). I'm working with a material that is anisotropic in its conductivity, i.e. it has a conductivity of 0.6 S/m in the x-direction and 0.087 S/m in the y-direction. Right now it seems I can only set an isotropic conductivity using the code below, where the conductivity is set to 0.6 S/m in all directions:
model.MaterialProperties(1) = materialProperties(ElectricalConductivity=0.6,RelativePermittivity=4.96e4);
I know you can use function handles to alter the way that the material property is applied spatially, but how would someone do this for properties that depend on the direction (x or y)?
Thanks for the help.
anisotropic, material property, pde toolbox MATLAB Answers — New Questions
How can I make the layout in the attached image with tiledlayout
I am able to get plot 1 and plot 2 in but not 3, 4 and 5. I can also get 3, 4 and 5 in without plot 1 and 2 but that is not what I want, per the attached image.
tiledlayout MATLAB Answers — New Questions
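Mixed row widths like this are usually handled by choosing a grid whose column count is a common multiple and letting tiles span several cells via nexttile(tile, [rows cols]). The attached image is not available here, so this sketch assumes two wide plots on the top row and three on the bottom row:

```matlab
% 2 plots over 3 plots: use a 2-by-6 grid so both rows divide evenly.
t = tiledlayout(2, 6, 'TileSpacing', 'compact');
nexttile(1,  [1 3]); plot(rand(10,1)); title('Plot 1')  % top row, left half
nexttile(4,  [1 3]); plot(rand(10,1)); title('Plot 2')  % top row, right half
nexttile(7,  [1 2]); plot(rand(10,1)); title('Plot 3')  % bottom row thirds
nexttile(9,  [1 2]); plot(rand(10,1)); title('Plot 4')
nexttile(11, [1 2]); plot(rand(10,1)); title('Plot 5')
```

The first argument to nexttile is the (row-major) tile number where the spanned region starts.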
How to fix java.lang.ClassNotFoundException: com.mathworks.toolbox.javabuilder.MWException?
I have a RESTful API project using Spring Boot and Maven. I also do processing in MATLAB via a JAR file. I packaged the project as a .jar file, but running it with java -jar demo.jar exits immediately with errors, even though as a RESTful API it needs to stay running so that I can access the endpoints.
In VS Code Java: Java 11
Matlab: R2018a, JRE 1.8
The MATLAB-related errors after running java -jar demo.jar:
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2023-10-02 11:21:51.075 ERROR 11484 — [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'beso3D_PD': Lookup method resolution failed; nested exception is java.lang.IllegalStateException: Failed to introspect Class [peridynamics.Beso3D_PD] from ClassLoader [org.springframework.boot.loader.LaunchedURLClassLoader@1ed6993a]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.determineCandidateConstructors(AutowiredAnnotationBeanPostProcessor.java:298) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineConstructorsFromBeanPostProcessors(AbstractAutowireCapableBeanFactory.java:1302) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1219) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:921) ~[spring-context-5.3.30.jar!/:5.3.30]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.30.jar!/:5.3.30]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:731) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) ~[spring-boot-2.7.16.jar!/:2.7.16]
at peridynamics.demoApp.main(demoApp.java:11) ~[classes!/:0.0.1-SNAPSHOT]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
Caused by: java.lang.IllegalStateException: Failed to introspect Class [peridynamics.Beso3D_PD] from ClassLoader [org.springframework.boot.loader.LaunchedURLClassLoader@1ed6993a]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:485) ~[spring-core-5.3.30.jar!/:5.3.30]
at org.springframework.util.ReflectionUtils.doWithLocalMethods(ReflectionUtils.java:321) ~[spring-core-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.determineCandidateConstructors(AutowiredAnnotationBeanPostProcessor.java:276) ~[spring-beans-5.3.30.jar!/:5.3.30]
… 26 common frames omitted
Caused by: java.lang.NoClassDefFoundError: com/mathworks/toolbox/javabuilder/MWException
at java.base/java.lang.Class.getDeclaredMethods0(Native Method) ~[na:na]
at java.base/java.lang.Class.privateGetDeclaredMethods(Class.java:3402) ~[na:na]
at java.base/java.lang.Class.getDeclaredMethods(Class.java:2504) ~[na:na]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:467) ~[spring-core-5.3.30.jar!/:5.3.30]
… 28 common frames omitted
Caused by: java.lang.ClassNotFoundException: com.mathworks.toolbox.javabuilder.MWException
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[na:na]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:587) ~[na:na]
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[na:na]
… 32 common frames omitted
How to solve this problem?
matlab, spring boot, mwexception MATLAB Answers — New Questions
Conditional array accumulation inside parfor
I have a situation where I am testing a condition inside a parfor loop and, if it is true, appending the results of a computation to an array. A simplified example is as follows:
ary = [];
parfor n=1:N
for m = 1:M
if (f(m,n)>0) % do some test, this is not easily vectorizable
ary = [ary; n m];
end
end
end
I would like, however, to avoid growing arrays in the loop.
I could estimate an upper bound for the size of ary and try to do it this way,
ary = zeros(ubound,2);
ind = 0;
parfor n=1:N
for m = 1:M
if (f(m,n)>0) % do some test, this is not easily vectorizable
ind = ind + 1;
ary(ind,:) = [n m]; % such indexing will not work within parfor
end
end
end
but that wouldn’t work as shown in the comment.
Another idea I had was using a logical array to keep track of the conditional result.
condary = false(N*M,1); % column vector; false(N*M) would allocate an (N*M)-by-(N*M) matrix
for k = 1:N*M % flatten the loop
% get n and m from k; k = (n-1)*M+m, therefore
m = mod(k,M); if m == 0, m = M; end
n = (k-m)/M+1;
if (f(m,n)>0)
condary(k) = true;
end
end
The desired array, ary, can then be back-constructed from the logical array in a second loop. In fact, ary, can be preallocated at this point. Or the operations meant to be performed using ary can be performed based on condary in a second loop. But this involves flattening the loop.
I was wondering if there are any better ways to do this. parfor, array, preallocation MATLAB Answers — New Questions
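One common pattern for the loop above (a sketch, not from the original post) is to give each parfor iteration its own local accumulator and collect the results in a sliced cell output, concatenating once after the loop. This avoids both growing arrays and the shared index that parfor rejects:

```matlab
% Sketch: per-iteration local accumulation with a sliced cell output.
% f here is a placeholder for the user's (non-vectorizable) test.
N = 10; M = 20;
f = @(m,n) sin(m*n);          % stand-in for the real test function
hits = cell(N,1);             % one slot per parfor iteration (sliced)
parfor n = 1:N
    local = zeros(M,2);       % preallocate the per-iteration worst case
    cnt = 0;
    for m = 1:M
        if f(m,n) > 0
            cnt = cnt + 1;
            local(cnt,:) = [n m];
        end
    end
    hits{n} = local(1:cnt,:); % keep only the rows actually used
end
ary = vertcat(hits{:});       % final [n m] list, built in one step
```

Each worker only ever writes its own cell, so parfor's slicing rules are satisfied, and no array grows inside the loop.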
Mean shift for an image
Please, I need a mean-shift algorithm code for the segmentation of a grayscale image. If anyone can help me, thank you in advance. shift mean, segmentation, image niveau gris MATLAB Answers — New Questions
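A minimal sketch of intensity-only mean-shift segmentation (the bandwidth h, the iteration count, and the example image are all assumptions; full mean shift would also use spatial coordinates):

```matlab
% Sketch: 1-D (intensity-only) mean shift on a grayscale image.
I = im2double(imread('cameraman.tif'));   % example image (ships with IPT)
h = 0.1;                                  % kernel bandwidth, assumed
modes = I(:);                             % start each pixel at its own intensity
for it = 1:20                             % fixed number of shift iterations
    % Shift each current value to the mean of the data within the window.
    % Grouping by (rounded) unique values keeps the 1-D case fast.
    [vals, ~, idx] = unique(round(modes*100)/100);
    shifted = zeros(size(vals));
    for k = 1:numel(vals)
        w = abs(I(:) - vals(k)) <= h;     % flat kernel window
        shifted(k) = mean(I(w));
    end
    modes = shifted(idx);
end
seg = reshape(modes, size(I));            % piecewise-constant result
imshow(seg);
```

Pixels converge to the nearest density mode of the gray-level histogram, which is what produces the flat segments.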
identify faces of a 3D geometry
I have two 3D geometries composed of nodes and faces.
file_im = importdata("f_mm.mat");
nodes_e = file_im.nodes_e;
faces_e = file_im.faces_e;
g_P_sez = file_im.g_P_sez;
figure
trimesh(faces_e(:,:),nodes_e(:,1),nodes_e(:,2),nodes_e(:,3),'EdgeColor','k','LineWidth',0.1,'FaceColor',[255 0 0]/255,'FaceAlpha',1)
hold on
plot3(g_P_sez(:,1),g_P_sez(:,2),g_P_sez(:,3),'k.','MarkerSize',15)
hold off
axis equal
xlabel('x')
ylabel('y')
zlabel('z')
I want to locate the faces of this geometry (yellow box) that are contained in ‘faces_e’.
As a help, I have the node 'g_P_sez', so I could select the faces within a distance X of that node.
There would be the nearestFace function, but it is not suitable for my case. Are there alternatives? faces, geometry, 3d, 3d plots, select MATLAB Answers — New Questions
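One way to select faces near the node (a sketch under the assumption that faces_e is an F-by-3 triangle list indexing rows of nodes_e, and that a single row of g_P_sez is the reference point) is to compute each face's centroid and keep the faces whose centroid lies within a distance X:

```matlab
% Sketch: select faces whose centroid is within distance X of a point.
% faces_e: F-by-3 node indices, nodes_e: V-by-3 coordinates (assumed layout).
P = g_P_sez(1,:);                              % reference point (1-by-3)
X = 5;                                         % threshold, in mesh units (assumed)
centroids = (nodes_e(faces_e(:,1),:) + ...
             nodes_e(faces_e(:,2),:) + ...
             nodes_e(faces_e(:,3),:)) / 3;     % F-by-3 face centroids
d = vecnorm(centroids - P, 2, 2);              % Euclidean distance per face
selFaces = faces_e(d <= X, :);                 % faces near the point
```

The logical mask `d <= X` can also be reused to highlight the selected faces in the trimesh plot.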
Physics-informed NN for parameter identification
Dear all,
I am trying to use the physics-informed neural network (PINN) for an inverse parameter identification for ODE or PDE.
I referenced the example in this link to write the code: https://ww2.mathworks.cn/matlabcentral/answers/2019216-physical-informed-neural-network-identify-coefficient-of-loss-function#answer_1312867
Here’s the program I wrote:
clear; clc;
% Specify training configuration
numEpochs = 500000;
avgG = [];
avgSqG = [];
batchSize = 500;
lossFcn = @modelLoss;
lr = 1e-5;
% Inverse PINN for d2x/dt2 = mu1*x + mu2*x^2
mu1Actual = -rand;
mu2Actual = rand;
x = @(t) cos(sqrt(-mu1Actual)*t) + sin(sqrt(-mu2Actual)*t);
maxT = 2*pi/sqrt(max(-mu1Actual, -mu2Actual));
t = dlarray(linspace(0, maxT, batchSize), "CB");
xactual = dlarray(x(t), "CB");
% Specify a network and initial guesses for mu1 and mu2
net = [
featureInputLayer(1)
fullyConnectedLayer(100)
tanhLayer
fullyConnectedLayer(100)
tanhLayer
fullyConnectedLayer(1)];
params.net = dlnetwork(net);
params.mu1 = dlarray(-0.5);
params.mu2 = dlarray(0.5);
% Train
for i = 1:numEpochs
[loss, grad] = dlfeval(lossFcn, t, xactual, params);
[params, avgG, avgSqG] = adamupdate(params, grad, avgG, avgSqG, i, lr);
if mod(i, 1000) == 0
fprintf("Epoch: %d, Predicted mu1: %.3f, Actual mu1: %.3f, Predicted mu2: %.3f, Actual mu2: %.3f\n", ...
i, extractdata(params.mu1), mu1Actual, extractdata(params.mu2), mu2Actual);
end
end
function [loss, grad] = modelLoss(t, x, params)
xpred = forward(params.net, t);
dxdt = dlgradient(sum(real(xpred)), t, 'EnableHigherDerivatives', true);
d2xdt2 = dlgradient(sum(dxdt), t);
% Modify the ODE residual based on your specific ODE
odeResidual = d2xdt2 - (params.mu1 * xpred + params.mu2 * xpred.^2);
% Compute the mean square error of the ODE residual
odeLoss = mean(odeResidual.^2);
% Compute the L2 difference between the predicted xpred and the true x.
dataLoss = l2loss(real(x), real(xpred)); % Ensure real part is used
% Sum the losses and take gradients
loss = odeLoss + dataLoss;
[grad.net, grad.mu1, grad.mu2] = dlgradient(loss, params.net.Learnables, params.mu1, params.mu2);
end
When I run the script no errors are reported, but the two parameters learned are not getting closer to the true values as the number of iterations increases:
I would like to know the reason for this situation and how to fix it; if you can help me change the code I will be very grateful! deep learning, pinn, physics-informed nn MATLAB Answers — New Questions
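One detail worth checking in the script above (my observation, not part of the original thread): with mu2Actual = rand > 0, sqrt(-mu2Actual) is imaginary, so the target x(t) is complex, and the real() calls in the loss silently discard part of the signal the parameters are supposed to explain. A minimal sanity check before training:

```matlab
% Sketch: verify the training target is real-valued before fitting.
mu1Actual = -rand;
mu2Actual = rand;
x = @(t) cos(sqrt(-mu1Actual)*t) + sin(sqrt(-mu2Actual)*t);
tt = linspace(0, 1, 5);
if any(abs(imag(x(tt))) > eps)
    warning("Target x(t) is complex; the ODE residual and data loss " + ...
            "will not match the intended real-valued dynamics.");
end
```

If the warning fires, the target function (or the sign convention on mu2) needs revisiting before blaming the optimizer or the learning rate.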
Why Does dcm2angle Work Like This?
Suppose I have a direction cosine matrix, brought to my attention by a colleague
C = round(angle2dcm(-pi/2,-pi/2,0,'ZYX'))
Extract the angles with @doc:dcm2angle (Aerospace Toolbox) using the Default option
[a1,a2,a3] = dcm2angle(C,'ZYX','Default');[a1,a2,a3]
Because the middle angle is -pi/2, the extracted angle triplets have multiple solutions, but the default result isn't one of them.
Sometime after R2019b and before or at R2022a, an additional optional argument can be specified, though I could find nothing about this new argument in the release notes nor the bug fixes.
[a1,a2,a3] = dcm2angle(C,'ZYX','Robust');[a1,a2,a3]
Now we get a correct answer.
Trying with @doc:rotm2eul that's used in other toolboxes (Robotics, Navigation, UAV) we see that it returns a correct result without any optional arguments.
eul = rotm2eul(C.','ZYX')
round(angle2dcm(eul(1),eul(2),eul(3),'ZYX'))
@doc:dcm2angle with the Robust option actually computes two sets of angles, then computes the DCM with each set, compares the recomputed DCMs to the input DCM, and returns the set of angles whose DCM is closest to the input. @doc:rotm2eul uses a different approach altogether, though it is limited to only three axis sequences.
To be sure, the Default option with @doc:dcm2angle is considerably faster than Robust, but Robust appears to take the same amount of time as @doc:rotm2eul
timeit(@() dcm2angle(repmat(C,1,1,1e5),'ZYX','Default'),3)
timeit(@() dcm2angle(repmat(C,1,1,1e5),'ZYX','Robust'),3)
timeit(@() rotm2eul(repmat(C.',1,1,1e5),'ZYX'),1)
Does anyone have a thought as to why dcm2angle is implemented as it is with Default and forces the user to use Robust? And why wouldn’t that same reason apply to rotm2eul?
Once MathWorks realized that dcm2angle Default was returning incorrect results in some cases, why patch it with Robust and keep Default instead of just fixing the bug? dcm2angle, rotm2eul MATLAB Answers — New Questions
How do I download .csv dataset directly from Kaggle with Matlab code.
Hello, I found a dataset on Kaggle which I want to use for my Machine learning project. However, I’m required to download the dataset from the code itself. I tried this:
url1 = 'https://www.kaggle.com/competitions/mrs-spring-2023-battery-prediction-challenge/data?select=example_submission.csv';
url2 = 'https://www.kaggle.com/competitions/mrs-spring-2023-battery-prediction-challenge/data?select=test_data.csv';
url3 = 'https://www.kaggle.com/competitions/mrs-spring-2023-battery-prediction-challenge/data?select=training_data.csv';
example = websave('example_submission.csv',url1);
test = websave('test_data.csv',url2);
training = websave('training_data.csv',url3);
data_example = readtable("example_submission.csv");
data_test = readtable("test_data.csv");
data_train = readtable("training_data.csv");
I was able to download them; however, instead of downloading the .csv files, it is downloading an HTML page that contains the Kaggle website. import, database, help MATLAB Answers — New Questions
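Those URLs point at the competition's web page, not at the files themselves, which is why websave saves HTML. Kaggle serves competition files only to authenticated users who have accepted the competition rules. A sketch of an authenticated download from MATLAB (the API endpoint and credential layout are assumptions based on Kaggle's public API; verify against their documentation):

```matlab
% Sketch: authenticated download via Kaggle's API using HTTP basic auth.
% Username and key come from the kaggle.json token in your account settings.
user = "your_kaggle_username";          % assumed placeholder
key  = "your_api_key";                  % assumed placeholder
opts = weboptions("Username", user, "Password", key, ...
                  "ContentType", "binary", "Timeout", 60);
base = "https://www.kaggle.com/api/v1/competitions/data/download/" + ...
       "mrs-spring-2023-battery-prediction-challenge/";
websave("training_data.csv", base + "training_data.csv", opts);
data_train = readtable("training_data.csv");
```

If the download still returns HTML, the account has likely not accepted the competition rules, or the endpoint has changed; the alternative is to download the files once in a browser and read them locally.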
Use fixed colormap or colorbar scale for series of 3D bar graphs in video animation
I’ve been able to incorporate code from this answer:
https://www.mathworks.com/matlabcentral/answers/98236-how-can-i-color-bars-to-correspond-to-their-heights-when-using-bar3
…in order to set uniform colors to bars in bar3() plots based on height. I would like the color scale to be fixed (10 = burgundy, 0 = navy) throughout all plots; however, it seems to be relative to the max/min of each one. I tried setting c.TicksLimit = [0 10], but that didn't work. It's quite possible there's an entirely better/different approach for this 3D visualization. I am open to a changeup but would also appreciate the opportunity to better understand bar3() and figure properties. Thank you for your time and assistance.
Here's the current animation produced by the code below to help visualize. Ideally, the initial condition would be all burgundy bars, and the colorbar would be fixed throughout the animation from 0 to 10.
function animator(frames)
filename = 'animation.mp4';
v=VideoWriter(filename,'MPEG-4');
v.FrameRate=2;
open(v);
for i1=1:length(frames)
active=frames(:,:,i1);
b=bar3(active);
colormap(turbo(10)); %% Perhaps something ‘smart’ happens here instead of fixed turbo(10)
% to set the colormap based on relative max/min?
set(gca,'ZLim',[0 10]);
% Copied from above link and properly colors all bars to same color
% based on height
numBars=size(active,1);
numSets=size(active,2);
for i2=1:numSets
zdata=ones(6*numBars,4);
k=1;
for j=0:6:(6*numBars-6)
zdata(j+1:j+6,:)=active(k,i2);
k=k+1;
end
set(b(i2),'CData',zdata)
end
c=colorbar;
c.Ticks=0:10;
% Or maybe something happens here to fix the colorbar scale?
exportgraphics(gca,'temp.png');
im=imread('temp.png');
im=imresize(im,[560 730]);
writeVideo(v,im);
end
end
3d plots, colormap MATLAB Answers — New Questions
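The relative scaling comes from the axes' automatic color limits, which rescale to each frame's data. Pinning the limits should fix the mapping so that 0 always maps to the bottom of the colormap and 10 to the top. A sketch of the lines to use inside the loop, after bar3:

```matlab
% Sketch: pin the color scale for every frame of the animation.
colormap(turbo(10));
set(gca, 'CLim', [0 10]);   % or clim([0 10]) in newer releases
c = colorbar;
c.Ticks = 0:10;             % tick labels stay at 0..10 regardless of data
```

With CLim fixed, the per-bar CData values (the bar heights) are mapped against the same 0–10 range on every frame, so the initial all-10 frame renders fully burgundy.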
I want a voltage graph that shows a charging curve from time 0 to 0.08 seconds, followed by a constant voltage from 0.08 to 0.1 seconds. Kindly guide me
I want a voltage graph that shows a charging curve from time 0 to 0.08 seconds, followed by a constant voltage from 0.08 to 0.1 seconds. Kindly guide me. simulink, matlab MATLAB Answers — New Questions
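A minimal sketch (the amplitude and time constant are assumptions; an RC-style exponential is one common charging shape): the curve rises exponentially until 0.08 s, then holds its value until 0.1 s:

```matlab
% Sketch: exponential charging to 0.08 s, then a constant hold to 0.1 s.
Vmax = 5;                        % assumed final voltage, volts
tau  = 0.015;                    % assumed time constant, seconds
t = linspace(0, 0.1, 1000);
V = Vmax * (1 - exp(-t/tau));    % RC charging curve
V(t >= 0.08) = Vmax * (1 - exp(-0.08/tau));  % hold the value reached at 0.08 s
plot(t, V, 'LineWidth', 1.5);
grid on
xlabel('Time (s)'); ylabel('Voltage (V)');
title('Charging curve with constant voltage after 0.08 s');
```

Adjust Vmax and tau to match the circuit; a smaller tau makes the curve flatten earlier within the 0.08 s window.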
How to do the filtered back projection?
Dear all,
I want to do the filtered back projection. Here is my code:
First, I created the phantom:
%create the phantom
Z = zeros(99); % create square matrix of zeroes
origin = [round((size(Z,2)-1)/2+1) round((size(Z,1)-1)/2+1)]; % "center" of the matrix
radius = round(sqrt(numel(Z)/(2*pi))); % radius for a circle that fills half the area of the matrix
[xx,yy] = meshgrid((1:size(Z,2))-origin(1),(1:size(Z,1))-origin(2)); % create x and y grid
Z(sqrt(xx.^2 + yy.^2) <= radius) = 1; % set points inside the radius equal to one
imshow(Z); % show the "image"
Second, I transform to frequency domain:
j = fftshift(fft2(Z));
figure, imshow(j)
j1 = log(1+abs(j));
figure ,imshow(j1)
j2 = bar(j1);
Third, should be I multiply my frequency domain with my Ramp Filter. My Ramp filter as here:
% Define parameters
N = 512; % Number of points in the filter
fs = 1000; % Sampling freq. in Hz
f = fs * (-N/2:N/2-1)/N; % Freq. vector
% Creating the ramp filter in the freq. domain
rampFilter = abs(f);
% Plot
figure;
plot(f, rampFilter);
title('Ramp Filter');
xlabel('Frequency (Hz)');
ylabel('Amplitude');
grid on;
But my problem is that I don't know how to multiply my frequency domain by the ramp filter, as in step number 2 in the picture below.
Can anyone help me?
image analysis, image processing, image acquisition, image segmentation, digital image processing, image MATLAB Answers — New Questions
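One point worth noting: in filtered back projection the ramp filter multiplies the 1-D FFT of each *projection* (each column of the sinogram), not the 2-D FFT of the image itself, and the multiplication is element-wise (`.*`). A minimal sketch, assuming the Image Processing Toolbox (`phantom`, `radon`, `iradon`) is available:

```matlab
% Sketch: ramp-filter each projection of a sinogram, then back-project.
P = phantom(128);                       % test image
theta = 0:179;                          % projection angles, degrees
R = radon(P, theta);                    % sinogram: one column per angle
n = size(R, 1);
% frequency vector in the same order as fft output (no fftshift needed)
f = [0:floor(n/2), -(ceil(n/2)-1):-1].' / n;
rampFilter = abs(f);                    % ramp filter, FFT ordering
Rf = real(ifft(fft(R) .* rampFilter));  % filter every column at once
I = iradon(Rf, theta, 'linear', 'none');% back-project, no extra filtering
figure, imshow(I, []);
```

The `n x 1` filter multiplies all columns of `fft(R)` via implicit expansion; `'none'` tells `iradon` not to apply its own filter a second time.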
Extract numeric values from a colormap
Hello everyone
I am trying to extract the numerical values of a variable represented in the following colormap with its respective colorbar (which I also attach in PDF format):
It would be very desirable to create a database based on the spatial deviation from the center of the hexagon along the directions defined by the a1 and a2 vectors, as a fraction of a0, where a0 is the modulus of the aforementioned vectors:
Any idea?
colormap, visual data MATLAB Answers — New Questions
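A possible starting point is to match every pixel of the rendered image to its nearest colormap entry and map that index back onto the colorbar range. Everything here is a hypothetical assumption: the file name, that the figure used `parula`, and the colorbar limits; `pdist2` needs the Statistics and Machine Learning Toolbox:

```matlab
% Sketch: recover numeric values from a pseudocolor image by nearest-color
% lookup against the colormap. All names/values below are assumptions.
img  = im2double(imread('map.png'));   % assumed input file
cmap = parula(256);                    % assumed: figure used parula
vmin = 0; vmax = 1;                    % assumed colorbar limits
pix = reshape(img, [], 3);             % N x 3 list of pixel colors
% nearest colormap entry for every pixel (pdist2: Statistics Toolbox)
[~, idx] = min(pdist2(pix, cmap), [], 2);
vals = vmin + (idx - 1)/(size(cmap,1) - 1) * (vmax - vmin);
V = reshape(vals, size(img,1), size(img,2));   % recovered value map
```

With `V` in hand, the deviation along a1/a2 as a fraction of a0 becomes ordinary index arithmetic on the pixel grid.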
How the extract EEG data according to different epoch limits?
Hey there! Currently I have EEG data with events like "2", "4", "6", "8", which mark the onset of different audio stimuli. There are 20 kinds of specific audio stimuli with some similar features for each kind of event/trigger. I have the stimulus matrix generated in each experiment, which is in the actual stimulation order, so the specific stimulus for each event can be identified. In each stimulus there is 1 s of silence at the beginning and the end, and the actual voice lasts from 1 s to 2 s. I can use the detectSpeech function to find the onset and end of the voice.
Now I want to extract the EEG data corresponding to the actual audio stimuli and delete the silent periods, which cannot be done by extracting epochs with fixed limits (start and end). Could you please tell me how to use a script to get EEG epochs of different lengths?
Thank you very much!
eeg, matlab, epoch, extract, audio stimui, event related potential MATLAB Answers — New Questions
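Since the epochs have different lengths, a plain numeric array will not hold them, but a cell array will. A sketch under stated assumptions — all variables below are hypothetical stand-ins (`eeg` is channels x samples, `eventSample` holds trigger positions, `speechStart`/`speechEnd` are the per-stimulus bounds from detectSpeech in audio samples):

```matlab
% Sketch: variable-length epochs stored in a cell array. Dummy data
% stands in for the real recording; replace with your own variables.
fsEEG = 500; fsAudio = 16000;           % assumed sampling rates
eeg = randn(32, fsEEG*60);              % dummy continuous EEG
eventSample = [2000 12000 22000];       % dummy trigger samples
speechStart = [16000 17000 16500];      % dummy detectSpeech starts (audio samples)
speechEnd   = [32000 40000 36000];      % dummy detectSpeech ends
epochs = cell(numel(eventSample), 1);
for k = 1:numel(eventSample)
    % convert audio-sample bounds to EEG samples relative to the trigger
    i1 = eventSample(k) + round((speechStart(k) - 1)/fsAudio*fsEEG);
    i2 = eventSample(k) + round((speechEnd(k)   - 1)/fsAudio*fsEEG);
    epochs{k} = eeg(:, i1:i2);          % each epoch keeps its own length
end
```

`epochs{k}` then holds only the voiced portion of trial k, with the silent padding excluded.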
How to define line code to block Frequency EEG based on specific range?
I am new to MATLAB and programming. I am looking for a base script to block a specific range of EEG frequencies.
I would like:
to create a script able to detect the frequencies from 0 to 20 Hz;
to block some of these frequencies.
Thank you for your help.
eeg, frequency MATLAB Answers — New Questions
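Blocking a band of frequencies is a band-stop filtering problem. A minimal sketch, assuming the Signal Processing Toolbox (`bandstop`, `pwelch`) and hypothetical values (250 Hz sampling rate, blocking 8-12 Hz) in place of real EEG:

```matlab
% Sketch: remove (block) a chosen frequency band from a signal, then
% inspect the 0-20 Hz spectrum before and after. Values are assumptions.
fs = 250;                                 % assumed EEG sampling rate, Hz
t = 0:1/fs:10;
x = sin(2*pi*10*t) + 0.5*sin(2*pi*4*t);   % dummy "EEG": 10 Hz + 4 Hz tones
y = bandstop(x, [8 12], fs);              % attenuate 8-12 Hz
[pxx, f] = pwelch(x, [], [], [], fs);     % spectrum before
[pyy, ~] = pwelch(y, [], [], [], fs);     % spectrum after
figure, plot(f, 10*log10(pxx), f, 10*log10(pyy));
xlim([0 20]); xlabel('Frequency (Hz)'); ylabel('Power (dB)');
legend('original', 'band-stopped'); grid on;
```

The `pwelch` plot limited to 0-20 Hz covers the "detect" part of the question; change the `[8 12]` band edges to block a different range.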