Incorrect use of dlarray/dlgradient
Hello, I am writing a custom loss function for dimensionality reduction that maximizes the Bhattacharyya distance between two classes.
But it fails with the following error:
Incorrect use of dlarray/dlgradient
The requested value was not tracked. It must be a real dlarray scalar for tracking. Please use dlgradient to track variables in the function called by dlfeval.
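For context, this error is typically raised when the value handed to dlgradient is no longer a traced dlarray, for example after extractdata has converted it to a plain double. A minimal sketch (a hypothetical example, not part of the original code) of that failure mode:

```matlab
% extractdata strips tracing, so the squared value is a plain double
% and dlgradient has nothing to differentiate.
x = dlarray(2);
y = dlfeval(@(v) dlgradient(extractdata(v).^2, v), x);  % raises a "not tracked" error
```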
% Parameter settings
M = 10; % Dimension of input features
N = 50; % Number of samples per class
numEpochs = 100;
learnRate = 0.01;
% Generate example data
X = rand(2*N, M);
X(1:N, :) = X(1:N, :) + 1; % Data for class A
X(N+1:end, :) = X(N+1:end, :) - 1; % Data for class B
% Define the neural network
layers = [
featureInputLayer(M, 'Normalization', 'none')
fullyConnectedLayer(10)
reluLayer
fullyConnectedLayer(3)
];
dlnet = dlnetwork(layerGraph(layers));
% Custom training loop
for epoch = 1:numEpochs
dlX = dlarray(X', 'CB'); % Transpose input data to match the network's expected format
[gradients, loss] = dlfeval(@modelGradients, dlnet, dlX, N);
dlnet = dlupdate(@sgdmupdate, dlnet, gradients, learnRate);
disp(['Epoch ' num2str(epoch) ', Loss: ' num2str(extractdata(loss))]);
end
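As a side note on the update step above: dlupdate applies the supplied function parameter by parameter, and a common pattern is to capture the learning rate in an anonymous function rather than passing it as a trailing scalar argument. A sketch, assuming plain gradient descent is what is intended:

```matlab
% Equivalent SGD step without a separate sgdmupdate helper;
% the anonymous function closes over learnRate.
dlnet = dlupdate(@(p, g) p - learnRate .* g, dlnet, gradients);
```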
% Testing phase
X_test = rand(N, M); % Assume test data is randomly generated
dlX_test = dlarray(X_test', 'CB'); % Transpose input data to match the network's expected format
Y_test = predict(dlnet, dlX_test);
disp('Dimensionality reduction results during testing:');
disp(extractdata(Y_test)');
% Custom loss function
function loss = customLoss(Y, N)
YA = extractdata(Y(:, 1:N))';
YB = extractdata(Y(:, N+1:end))';
muA = mean(YA);
muB = mean(YB);
covA = cov(YA);
covB = cov(YB);
covMean = (covA + covB) / 2;
d = 0.25 * (muA - muB) / covMean * (muA - muB)' + 0.5 * log(det(covMean) / sqrt(det(covA) * det(covB)));
loss = -d; % Maximize Bhattacharyya distance
loss = dlarray(loss); % Ensure loss is a tracked dlarray scalar
end
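For reference, a tracing-friendly variant of this loss would keep every operation on dlarray values and avoid extractdata entirely, since extractdata is what breaks the trace that dlgradient needs. The sketch below is an assumption about the intended fix, not a guaranteed drop-in: it forms the covariances with mtimes so they stay traced, but note that cov, det, and the backslash solve are not on the dlarray supported-function list in all releases, so if they error, the 3-by-3 determinant and inverse would need explicit cofactor formulas.

```matlab
function loss = customLossTraced(Y, N)
    % Keep everything as dlarray so dlgradient can trace the loss.
    YA = Y(:, 1:N)';          % N-by-3 projected samples, class A
    YB = Y(:, N+1:end)';      % N-by-3 projected samples, class B
    muA = mean(YA, 1);
    muB = mean(YB, 1);
    ZA = YA - muA;            % centered samples
    ZB = YB - muB;
    covA = (ZA' * ZA) / (N - 1);
    covB = (ZB' * ZB) / (N - 1);
    covMean = (covA + covB) / 2;
    dmu = muA - muB;
    d = 0.25 * dmu * (covMean \ dmu') ...
        + 0.5 * log(det(covMean) / sqrt(det(covA) * det(covB)));
    loss = -d;                % maximize the Bhattacharyya distance
end
```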
% Model gradient function
function [gradients, loss] = modelGradients(dlnet, dlX, N)
Y = forward(dlnet, dlX);
loss = customLoss(Y, N);
gradients = dlgradient(loss, dlnet.Learnables);
end
% Update function
function param = sgdmupdate(param, grad, learnRate)
param = param - learnRate * grad;
end

deep learning
MATLAB Answers - New Questions