Timeout error when output of entry-point function is a vector with PIL configuration
Greetings
I'm working on deploying a deep learning network to a Raspberry Pi 4 with MATLAB R2024a. I get a timeout error when the entry-point function's output is a vector and the configuration is PIL; with a MEX configuration, however, the same code runs successfully. The simple code below demonstrates the issue.
How can the code be fixed so that, with the PIL configuration, the entry-point function can return a vector or matrix?
The motivation for this request is described in more detail in my case study below.
% Raspberry Pi connection and configuration
r = raspi('raspberrypi','pi','raspberry');
cfg = coder.config('lib','ecoder',true);
cfg.VerificationMode = 'PIL';
cfg.TargetLang = 'C++';
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmComputeVersion = '20.02.1';
dlcfg.ArmArchitecture = 'armv7';
cfg.DeepLearningConfig = dlcfg;
cfg.MATLABSourceComments = 1;
hw = coder.hardware('Raspberry Pi');
cfg.Hardware = hw;
cfg.CodeExecutionProfiling = true;
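A quick sanity check that the board is reachable before building (my own habit, not part of the example):
system(r,'uname -a')   % should print the Pi's kernel info if the link is up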
Case 1: with MEX configuration, successfully implemented
type dummy
t = ones(1,3);
codegen dummy -args {t} -report -config:mex
% testing deployment
x = [1,2,3];
testDummy = dummy_mex(x);
Case 2: with PIL configuration, timeout error generated
type dummy
cfg.Hardware.BuildDir = '~/dummy';
t = ones(1,3);
codegen dummy -args {t} -report -config cfg
% testing deployment
x = [1,2,3];
testDummy = dummy_pil(x);
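As I understand it, the PIL executable stays running on the target until the PIL interface function is cleared, so I clear it before rebuilding:
clear dummy_pil   % terminates the PIL execution on the target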
Entry-point function
function out = dummy(in)
%#codegen
out = in;
end
Case Study
My case study involves raw data consisting of 8 channels, from which I need to extract features with a wavelet scattering network and then pass them into a trained LSTM network. My code is similar to the example below; the difference is that my raw input data has 8 channels, while the example's has one: openExample('wavelet/CodeGenerationForFaultDetectionUsingWaveletAndRNNExample').
I'm getting the error below:
Number of features of input passed to the predict method must be a code generation time constant
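(As I understand it, one way to satisfy this requirement is to hand codegen a fixed-size example input so the feature count is a compile-time constant; a minimal sketch, with the 32 x 102 size taken from my feature matrix:)
% Sketch: make the feature count a code generation time constant by
% specifying a fixed-size (not variable-size) input type.
featType = coder.typeof(double(0),[32 102]);   % 32 features x 102 time steps
codegen predictFunction -args {featType} -config cfg -report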
So I decided to break the entry-point function into two functions for easier troubleshooting: featureFunction extracts wavelet features from the raw data (batch: 500 samples x 8 channels), and predictFunction passes the generated wavelet features (32 C x 102 T) into the trained LSTM network. Roughly, they look like the sketch below.
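A rough sketch of the two functions, each in its own .m file (the scattering parameters, MAT-file name, and internals are my assumptions; only the names and sizes come from my description above):
function feat = featureFunction(raw)
%#codegen
% raw: 500 x 8 batch (samples x channels)
persistent sn
if isempty(sn)
    sn = waveletScattering('SignalLength',500);   % parameters assumed
end
feat = featureMatrix(sn,raw);   % scattering features; shape depends on parameters
end

function labels = predictFunction(feat)
%#codegen
% feat: 32 x 102 feature matrix (C x T)
persistent net
if isempty(net)
    net = coder.loadDeepLearningNetwork('trainedLSTM.mat');   % file name assumed
end
labels = classify(net,feat);   % assuming a SeriesNetwork LSTM
end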
predictFunction_pil worked fine: it accepts an input of size (32 x 102) and returns the predicted labels.
featureFunction_pil gives the error below:
Error using rtw.pil.SILPILInterface.throwMException (line 1774)
The timeout of 300 seconds for receiving data from the rtiostream interface has been exceeded. There might be multiple reasons for this communications failure.
You should:
(a) Check that the target hardware configuration is correct, for example, check that the byte ordering is correct.
(b) Confirm that the target application is running on the target hardware.
(c) Consider the possibility of application run-time failures (e.g. divide by zero exceptions, incorrect custom code integration, etc.).
Note (c): To identify possible reasons for the run-time failure, consider using SIL, which supports signal handlers and debugging.
If you cannot find a solution, consider using the method setTimeoutRecvSecs of rtw.connectivity.RtIOStreamHostCommunicator to increase the timeout value.
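As a next step toward note (c), a host-side SIL run should expose run-time failures in featureFunction without involving the serial link; a minimal sketch, assuming the 500 x 8 raw batch from my case study (the ARM Compute config is dropped here because featureFunction uses no network):
silCfg = coder.config('lib','ecoder',true);
silCfg.VerificationMode = 'SIL';
t = ones(500,8);   % assumed raw-batch size
codegen featureFunction -args {t} -config silCfg -report
testFeat = featureFunction_sil(t);   % runs the SIL executable on the host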