How to export the INT8-quantized weights of a deep neural network?
I trained a neural network using Deep Learning Toolbox and quantized it.
The code below is what I used to quantize the network model to INT8.
% Create a dlquantizer object for quantization
quantObj = dlquantizer(net);
% quantOpts = dlquantizationOptions(Target='host');
calibrate(quantObj,imdsTrain);
% valResults = validate(quantObj,imdsValidation,quantOpts);
% valResults.Statistics
% Perform quantization
quantObj = quantize(quantObj);
qDetailsQuantized = quantizationDetails(quantObj)
% Save the quantized network
save('quantizedNet.mat','quantObj');
exportONNXNetwork(quantObj,'quantizedNet.onnx')
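For context, this is how I confirm that quantization actually took effect. A minimal sketch, assuming the struct returned by quantizationDetails exposes the IsQuantized, QuantizedLayerNames, and QuantizedLearnables fields described in the documentation:
% Sketch: inspect the quantization details (field names assumed)
qDetailsQuantized.IsQuantized           % expected: logical 1 (true)
qDetailsQuantized.QuantizedLayerNames   % names of the layers that were quantized
qDetailsQuantized.QuantizedLearnables   % table with Layer / Parameter / Value columns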
After quantization, I got the quantized network quantObj.
However, I cannot access the weights and biases that were converted to INT8 format.
When I display the quantized network's weights and biases using the code below,
>> disp(quantObj.Layers(2).Bias(:,:,1))
-6.9011793e-12
it still shows a floating-point value.
Even when I try to export the network to ONNX, MATLAB shows the warning below:
>> exportONNXNetwork(quantObj,'quantizedNet.onnx')
Warning: Exported weights are not quantized when exporting quantized networks.
How can I access the INT8-quantized weight and bias values?
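In case it clarifies what I am after, this is the kind of access I was hoping for. A minimal sketch, assuming QuantizedLearnables is a table whose Layer, Parameter, and Value columns hold the INT8 arrays (my reading of the quantizationDetails documentation, not something I have verified on my network):
% Sketch: pull the INT8 bias of layer 2 out of the quantization details
qDetails = quantizationDetails(quantObj);
tbl      = qDetails.QuantizedLearnables;               % Layer | Parameter | Value
rowMask  = strcmp(tbl.Layer, quantObj.Layers(2).Name) & ...
           strcmp(tbl.Parameter, "Bias");
int8Bias = tbl.Value{rowMask};                         % expected class: int8
class(int8Bias)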