## Speed of the abs function

Hello everybody!

I had a question about how MATLAB computes the absolute value function.

I was working on a school project and had to optimize code that returned the smallest Euclidean distance from the origin of a 3-D cloud of points. Simple enough; my first thought was to use the basic:

d = x.^2 + y.^2 + z.^2;

dmin = sqrt(min(d));

But then, thinking that squaring is an expensive operation, especially on a data set that can be billions of points big, I tried to implement a "pre-check". My idea was the following: the 1-norm is much cheaper to compute, so I can get a first "estimate" of the smallest distance very quickly, and then compute the 2-norm only for the points that are "close enough" to the origin, i.e. on a much smaller data set.

So I did this:

dEstimate = abs(x) + abs(y) + abs(z);

dMinEstimate = min(dEstimate);

threshold = dMinEstimate*sqrt(3); % the point closest in the 2-norm has a 1-norm at most sqrt(3) times the smallest 1-norm.

potentialMin = find(dEstimate < threshold);

d = x(potentialMin).^2 + y(potentialMin).^2 + z(potentialMin).^2;

dmin = sqrt(min(d));
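For reference, here is the same two-stage idea sketched in Python/NumPy (the variable names are mine, and this is not my actual MATLAB code); it also checks that the sqrt(3) bound gives the same answer as the brute-force version:

```python
import numpy as np

# ||v||_2 <= ||v||_1 <= sqrt(3)*||v||_2 for 3-D vectors, so the point
# nearest the origin in the 2-norm must have a 1-norm of at most
# sqrt(3) times the smallest 1-norm in the data set.
rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 100_000))

# Stage 1: cheap 1-norm estimate over everything
d_est = np.abs(x) + np.abs(y) + np.abs(z)
threshold = d_est.min() * np.sqrt(3)

# Stage 2: exact squared 2-norm only on the surviving candidates
keep = d_est <= threshold
d = x[keep]**2 + y[keep]**2 + z[keep]**2
dmin_two_stage = np.sqrt(d.min())

# Brute force over the full set, for comparison
dmin_direct = np.sqrt((x**2 + y**2 + z**2).min())
assert np.isclose(dmin_two_stage, dmin_direct)
```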

But to my surprise, when I used the Profiler to compare the two versions, this is what I got:

As you can see, the results are averaged over 100 calls in order to reduce timing noise. Keep in mind that the data set here is one million points.

How can the squaring operation be as fast as, or even faster than, the absolute value?

I would assume that the absolute value is computed by clearing the sign bit in the IEEE 754 double representation, which is a very simple operation, while squaring is a much more complex one.
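To illustrate what I mean, here is a hypothetical Python sketch of that sign-bit trick (the helper name is made up, and I am not claiming this is how MATLAB actually implements abs):

```python
import struct

def abs_via_sign_bit(v: float) -> float:
    """Absolute value by masking off the IEEE 754 sign bit.

    Pack the double into its 64-bit pattern, clear the top bit,
    and reinterpret the result as a double again.
    """
    (bits,) = struct.unpack("<Q", struct.pack("<d", v))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits & 0x7FFF_FFFF_FFFF_FFFF))
    return out

assert abs_via_sign_bit(-3.5) == 3.5
assert abs_via_sign_bit(2.25) == 2.25
```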

Is MATLAB this optimised? How can this be?

Thanks in advance for your answer!

Arthur