Improving precision when solving the linear system A*x = 0
Hi, I'm solving a homogeneous system of equations A*x = 0 with
x=null(A);
The problem is that, due to a very wide range of values in the A matrix, round-off errors result in unacceptably large relative errors in the solution. How can I improve the numerical precision? The matrix is relatively small (30×30), so speed is not a big limitation. Also, all the large values are in the same columns.
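For reference, here is a hypothetical toy reproduction of the problem (not my actual matrix): a rank-deficient 30×30 matrix where one column is many orders of magnitude larger than the rest.

```matlab
% Hypothetical toy reproduction (not the actual A): a 30x30 matrix of
% rank 29 whose first column is 12 orders of magnitude larger than the rest.
n = 30;
B = randn(n, n-1);
A = B*randn(n-1, n);           % rank-deficient, one-dimensional null space
A(:,1) = A(:,1)*1e12;          % one badly scaled column
x = null(A);                   % basis vector of the numerical null space
rel_err = abs((A*x)./(vecnorm(A)'.*x));  % per-element relative error
semilogy(rel_err)              % the small |x_i| tend to show the worst errors
```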
Edit2: clarifying the question and objectives
To clarify my issue, my understanding is that null essentially ends up minimising norm(A*x) over unit vectors x.
It's limited by double precision, so even if you plug in the exact solution, I can only hope to have about
A*x <= length(x)*eps(max(A.*x'))
When I plot A*x, I get something closer to
A*x = length(x)*eps(max(A.*x',[],'all'))
This is probably to be expected, as the minimisation of norm(A*x) "distributes the error" over all components equally?
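A minimal sketch of that observation, assuming A and x = null(A) as above: the absolute residuals |(A*x)_i| sit near a single uniform level rather than scaling with each component.

```matlab
% Sketch, assuming A and x = null(A) from above: compare the absolute
% residuals against the single global bound from the formula above.
r = abs(A*x);
bound = length(x)*eps(max(abs(A.*x'),[],'all'));
semilogy([r, repmat(bound, size(r))])
legend('|A*x|','uniform bound')
```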
In my case, I care about the relative errors of the individual coordinates x_i. I want them to be small, and at least to know when a value is below machine precision.
Can I compute the small values more precisely, at the cost of the larger ones if necessary, and up to what threshold can I trust these values?
A good improvement was proposed:
s = max(abs(A),[],1); % rescaling
x = null(A./s)./(s');
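For what it's worth, this rescaling step is exact in exact arithmetic: (A./s) scales column j by 1/s(j), so any y with (A./s)*y = 0 satisfies A*(y./s') = 0, and dividing by s' recovers a null vector of the original A. A minimal check:

```matlab
% Sketch: dividing column j of A by s(j) and multiplying x_j by s(j)
% leaves A*x unchanged, so the rescaled problem has the same solution
% up to the final ./s' correction.
s = max(abs(A),[],1);    % one positive scale per column
y = null(A./s);          % null vector of the well-scaled matrix
x = y./s';               % undo the scaling
norm(A*x)                % residual stays tiny; only the conditioning improved
```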
This is better but not quite sufficient, and I still miss the key aspect of knowing which values are too small to be reasonably accurate. I initially used x>eps(1) to check for that. This was wrong, but my following attempts were not any better.
----- Edit: Correcting the question following Matt's advice. -----
I benchmark the relative error over each element x_i of the solution, with relative_error_i = |(A*x)_i| / (norm(A(:,i))*|x_i|):
relative_error = abs((A*x)./(vecnorm(A)'.*x));
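One useful property of this metric (a quick sketch, assuming A and some candidate solution x): it is invariant to the overall scaling of x, since the scale factor cancels between numerator and denominator, so the sum-to-one normalisation used below does not affect it.

```matlab
% Sketch: the per-element relative error is scale-invariant in x.
e1 = abs((A*x)./(vecnorm(A)'.*x));
e2 = abs((A*(7*x))./(vecnorm(A)'.*(7*x)));   % same as e1 up to rounding
```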
I have a somewhat dirty solution using a combination of rescaling, null(), and fsolve(). Below are the results for four different approaches.
x1 = null(A);
x1 = x1/sum(x1(:,1));
s = max(abs(A),[],1); % rescaling
x2 = null(A./s)./(s');
x2 = x2/sum(x2(:,1));
x3 = fsolve(@(y) A*y, x2, optimoptions('fsolve','Display','off'));
x3 = x3/sum(x3);
precision = length(A)*eps(max(abs(A.*(x2')),[],2));
x4 = fsolve(@(y) A*y./max(abs(x2),precision), x2, optimoptions('fsolve','Display','off'));
x4 = x4/sum(x4);
figure
hold on
plot(x4,'k--')
relative_error = abs((A*x1)./(vecnorm(A)'.*x1));
plot(relative_error)
relative_error = abs((A*x2)./(vecnorm(A)'.*x2));
plot(relative_error)
relative_error = abs((A*x3)./(vecnorm(A)'.*x3));
plot(relative_error)
relative_error = abs((A*x4)./(vecnorm(A)'.*x4));
plot(relative_error)
set(gca,'YScale','log')
legend({'x_4','err 1','err 2','err 3','err 4'})
The method is probably a bit slow and dirty, but the relative error reached is close to eps.