By xis


2011-06-21 18:49:55 8 Comments

I am doing some numerical optimization on a scientific application. One thing I noticed is that GCC will optimize the call pow(a,2) by compiling it into a*a, but the call pow(a,6) is not optimized and will actually call the library function pow, which greatly slows down the performance. (In contrast, Intel C++ Compiler, executable icc, will eliminate the library call for pow(a,6).)

What I am curious about is this: when I replaced pow(a,6) with a*a*a*a*a*a using GCC 4.5.1 and the options "-O3 -lm -funroll-loops -msse4", it uses 5 mulsd instructions:

movapd  %xmm14, %xmm13
mulsd   %xmm14, %xmm13
mulsd   %xmm14, %xmm13
mulsd   %xmm14, %xmm13
mulsd   %xmm14, %xmm13
mulsd   %xmm14, %xmm13

while if I write (a*a*a)*(a*a*a), it will produce

movapd  %xmm14, %xmm13
mulsd   %xmm14, %xmm13
mulsd   %xmm14, %xmm13
mulsd   %xmm13, %xmm13

which reduces the number of multiply instructions to 3. icc has similar behavior.

Why do compilers not recognize this optimization trick?
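(For reference, a minimal set of functions reproducing the comparison might look like the sketch below; the actual source file the assembly came from is not shown in the question, so the function names and compile command are illustrative only.)

#include <math.h>

/* Illustrative sketch only: the three variants discussed above, built with
   something like "gcc -O3 -funroll-loops -msse4 -c pow6.c" (flags from the
   question). Without -ffast-math, pow_call still emits a call to the libm pow(). */
double pow_call(double a)    { return pow(a, 6.0); }
double chain_mul(double a)   { return a * a * a * a * a * a; }     /* 5 mulsd */
double grouped_mul(double a) { return (a * a * a) * (a * a * a); } /* 3 mulsd */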

12 comments

@picomancer 2014-03-29 06:51:06

GCC does actually optimize a*a*a*a*a*a to (a*a*a)*(a*a*a) when a is an integer. I tried with this command:

$ echo 'int f(int x) { return x*x*x*x*x*x; }' | gcc -o - -O2 -S -masm=intel -x c -

There are a lot of gcc flags but nothing fancy. They mean: Read from stdin; use O2 optimization level; output assembly language listing instead of a binary; the listing should use Intel assembly language syntax; the input is in C language (usually language is inferred from input file extension, but there is no file extension when reading from stdin); and write to stdout.

Here's the important part of the output. I've annotated it with some comments indicating what's going on in the assembly language:

; x is in edi to begin with.  eax will be used as a temporary register.
mov  eax, edi  ; temp = x
imul eax, edi  ; temp = x * temp
imul eax, edi  ; temp = x * temp
imul eax, eax  ; temp = temp * temp

I'm using system GCC on Linux Mint 16 Petra, an Ubuntu derivative. Here's the gcc version:

$ gcc --version
gcc (Ubuntu/Linaro 4.8.1-10ubuntu9) 4.8.1

As other posters have noted, this optimization is not possible for floating point, because floating-point arithmetic is not associative.

@Peter Cordes 2015-07-30 05:18:59

This is legal for signed integer multiplication because signed overflow is undefined behaviour. If there's going to be an overflow, it will happen somewhere regardless of how the operations are reordered. So expressions with no overflow evaluate the same, and expressions that do overflow are undefined behaviour, which makes it OK for the compiler to change the point at which the overflow happens. gcc does this with unsigned int, too.

@vinc17 2014-06-27 21:03:11

No posters have mentioned the contraction of floating-point expressions yet (ISO C standard, 6.5p8 and 7.12.2). If the FP_CONTRACT pragma is set to ON, the compiler is allowed to regard an expression such as a*a*a*a*a*a as a single operation, as if evaluated exactly with a single rounding. For instance, a compiler may replace it by an internal power function that is both faster and more accurate. This is particularly interesting as the behavior is partly controlled by the programmer directly in the source code, while compiler options provided by the end user may sometimes be used incorrectly.

The default state of the FP_CONTRACT pragma is implementation-defined, so that a compiler is allowed to do such optimizations by default. Thus portable code that needs to strictly follow the IEEE 754 rules should explicitly set it to OFF.

If a compiler doesn't support this pragma, it must be conservative by avoiding any such optimization, in case the developer has chosen to set it to OFF.

GCC doesn't support this pragma, but with the default options, it assumes it to be ON; thus for targets with a hardware FMA, if one wants to prevent the transformation of a*b+c to fma(a,b,c), one needs to provide an option such as -ffp-contract=off (to explicitly set the pragma to OFF) or -std=c99 (to tell GCC to conform to some C standard version, here C99, and thus follow the above paragraph). In the past, the latter option did not prevent the transformation, meaning that GCC was not conforming on this point: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=37845
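For illustration, setting the pragma in portable C99 source looks like the sketch below (GCC itself ignores the pragma, as described above, so there -ffp-contract=off remains the practical switch; other compilers may honour it):

/* C99 7.12.2: with FP_CONTRACT OFF, the compiler may not contract
   a*b + c into a single fused multiply-add. Illustrative sketch only. */
double no_contract(double a, double b, double c) {
    #pragma STDC FP_CONTRACT OFF
    return a * b + c;
}

/* With FP_CONTRACT ON (or an implementation default of ON), the compiler
   is allowed to evaluate a*b + c as fma(a, b, c) on hardware with FMA. */
double may_contract(double a, double b, double c) {
    #pragma STDC FP_CONTRACT ON
    return a * b + c;
}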

@Pascal Cuoq 2014-06-27 21:19:43

Long-lived popular questions sometimes show their age. This question was asked and answered in 2011, when GCC could be excused for not respecting exactly the then recent C99 standard. Of course now it's 2014, so GCC… ahem.

@Pascal Cuoq 2014-06-27 21:22:57

Shouldn't you be answering comparatively recent floating-point questions without an accepted answer instead, though? cough stackoverflow.com/questions/23703408 cough

@David Monniaux 2016-11-19 15:17:58

I find it... disturbing that gcc does not implement C99 floating-point pragmas.

@Tim Seguine 2018-09-04 07:57:54

@DavidMonniaux pragmas are by definition optional to implement.

@vinc17 2018-09-04 10:25:19

@TimSeguine But if a pragma is not implemented, its default value needs to be the most restrictive for the implementation. I suppose that's what David was thinking about. With GCC, this is now fixed for FP_CONTRACT if one uses an ISO C mode: it still does not implement the pragma, but in an ISO C mode, it now assumes that the pragma is off.

@GameDeveloper 2015-01-03 16:40:39

Library functions like "pow" are usually carefully crafted to yield the minimum possible error (in the generic case). This is usually achieved by approximating the function with splines (according to Pascal's comment below, the most common implementation seems to use the Remez algorithm).

Fundamentally, the following operation:

pow(x,y);

has an inherent error of approximately the same magnitude as the error in any single multiplication or division.

While the following operation:

float a=someValue;
float b=a*a*a*a*a*a;

has an inherent error that is more than 5 times the error of a single multiplication or division (because you are combining 5 multiplications).

The compiler should be really careful about the kind of optimization it is doing:

  1. Optimizing pow(a,6) to a*a*a*a*a*a may improve performance, but drastically reduce the accuracy for floating-point numbers.
  2. Optimizing a*a*a*a*a*a to pow(a,6) may actually reduce the accuracy, because "a" may have been some special value that allows multiplication without error (a power of 2 or some small integer number).
  3. Optimizing pow(a,6) to (a*a*a)*(a*a*a) or (a*a)*(a*a)*(a*a) can still lose accuracy compared to the pow function.

In general you know that for arbitrary floating-point values "pow" has better accuracy than any function you could write yourself, but in some special cases multiple multiplications may have better accuracy and performance. It is up to the developer to choose what is more appropriate, and then to comment the code so that no one else will "optimize" it.

The only optimization that makes sense (personal opinion, and apparently a choice GCC makes without any particular optimization level or compiler flag) is replacing "pow(a,2)" with "a*a". That would be the only sane thing a compiler vendor should do.

@GameDeveloper 2015-01-03 16:59:27

Downvoters should realize that this answer is perfectly fine. I can quote dozens of sources and documentation to support my answer, and I'm probably more involved with floating-point precision than any downvoter. It is perfectly reasonable on StackOverflow to add missing information that other answers do not cover, so be polite and explain your reasons.

@Pascal Cuoq 2015-01-03 17:33:44

It seems to me that Stephen Canon's answer covers what you have to say. You seem to insist that libms are implemented with splines: they more typically use argument reduction (depending on the function being implemented) plus a single polynomial whose coefficients have been obtained by more or less sophisticated variants of the Remez algorithm. Smoothness at junction points is not considered an objective worth pursuing for libm functions (if they end up accurate enough, they are automatically quite smooth anyway, regardless of how many pieces the domain was split into).

@Pascal Cuoq 2015-01-03 17:35:53

The second half of your answer completely misses the point that compilers are supposed to produce code that implements what the source code says, period. Also you use the word “precision” when you mean “accuracy”.

@GameDeveloper 2015-01-03 22:35:03

Thanks for your input, I slightly corrected the answer, something new is still present in the last 2 lines ^^

@Charles 2016-06-16 18:44:57

gcc actually can do this optimization, even for floating-point numbers. For example,

double foo(double a) {
  return a*a*a*a*a*a;
}

becomes

foo(double):
    mulsd   %xmm0, %xmm0
    movapd  %xmm0, %xmm1
    mulsd   %xmm0, %xmm1
    mulsd   %xmm1, %xmm0
    ret

with -O -funsafe-math-optimizations. This reordering violates IEEE-754, though, so it requires the flag.

For signed integers, as Peter Cordes pointed out in a comment, gcc can do this optimization without -funsafe-math-optimizations, since it holds exactly when there is no overflow, and if there is overflow you get undefined behavior. So you get

foo(long):
    movq    %rdi, %rax
    imulq   %rdi, %rax
    imulq   %rdi, %rax
    imulq   %rax, %rax
    ret

with just -O. For unsigned integers, it's even easier since they work mod powers of 2 and so can be reordered freely even in the face of overflow.

@Peter Cordes 2016-06-17 00:09:53

Godbolt link with double, int and unsigned. gcc and clang both optimize all three the same way (with -ffast-math)

@Charles 2016-06-17 00:48:28

@PeterCordes Thanks!

@sanjoyd 2011-06-22 22:39:13

Another similar case: most compilers won't optimize a + b + c + d to (a + b) + (c + d) (this is an optimization since the second expression can be pipelined better) and will evaluate it as written (i.e. as (((a + b) + c) + d)). This too is because of corner cases:

float a = 1e35, b = 1e-5, c = -1e35, d = 1e-5;
printf("%e %e\n", a + b + c + d, (a + b) + (c + d));

This outputs 1.000000e-05 0.000000e+00

@GameDeveloper 2014-07-07 08:22:52

This is not exactly the same. Changing the order of multiplications/divisions (excluding division by 0) is safer than changing the order of sums/subtractions. In my humble opinion, the compiler should try to re-associate mults./divs. because doing so reduces the total number of operations, and besides the performance gain there's also a precision gain.

@Ben Voigt 2015-03-04 17:47:58

@DarioOO: It is no safer. Multiply and divide are the same as addition and subtraction of the exponent, and changing the order can easily cause temporaries to exceed the possible range of the exponent. (Not exactly the same, because the exponent doesn't suffer loss of precision... but the representation is still quite limited, and reordering can lead to unrepresentable values)

@GameDeveloper 2015-03-05 08:49:32

I think you are missing some calculus background. Multiplying or dividing 2 numbers introduces the same amount of error, while subtracting or adding 2 numbers may introduce a bigger error, especially when the 2 numbers are orders of magnitude apart; hence it is safer to rearrange mul/div than sub/add, because that introduces a minor change in the final error.

@Peter Cordes 2015-07-30 04:37:45

@DarioOO: the risk is different with mul/div: Reordering either makes a negligible change in the final result, or the exponent overflows at some point (where it wouldn't have before) and the result is massively different (potentially +inf or 0).

@Szabolcs 2011-06-23 11:44:53

Fortran (designed for scientific computing) has a built-in power operator, and as far as I know Fortran compilers will commonly optimize raising to integer powers in a similar fashion to what you describe. C/C++ unfortunately don't have a power operator, only the library function pow(). This doesn't prevent smart compilers from treating pow specially and computing it in a faster way for special cases, but it seems they do it less commonly ...

Some years ago I was trying to make it more convenient to calculate integer powers in an optimal way, and came up with the following. It's C++, not C though, and still depends on the compiler being somewhat smart about how to optimize/inline things. Anyway, hope you might find it useful in practice:

template<unsigned N> struct power_impl;

template<unsigned N> struct power_impl {
    template<typename T>
    static T calc(const T &x) {
        if (N%2 == 0)
            return power_impl<N/2>::calc(x*x);    // N even: x^N = (x^2)^(N/2)
        else if (N%3 == 0)
            return power_impl<N/3>::calc(x*x*x);  // N divisible by 3: x^N = (x^3)^(N/3)
        return power_impl<N-1>::calc(x)*x;        // otherwise peel off one factor
    }
};

template<> struct power_impl<0> {
    template<typename T>
    static T calc(const T &) { return 1; }        // base case: x^0 == 1
};

template<unsigned N, typename T>
inline T power(const T &x) {
    return power_impl<N>::calc(x);
}

Clarification for the curious: this does not find the optimal way to compute powers, but since finding the optimal solution is an NP-complete problem and this is only worth doing for small powers anyway (as opposed to using pow), there's no reason to fuss with the detail.

Then just use it as power<6>(a).

This makes it easy to type powers (no need to spell out 6 as with parens), and lets you have this kind of optimization without -ffast-math in case you have something precision dependent such as compensated summation (an example where the order of operations is essential).
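As a sketch of why the order matters there (this example is not part of the original answer), Kahan compensated summation only works if the compiler is not allowed to reassociate the additions; under -ffast-math / -fassociative-math the correction term can legally be folded away to zero:

#include <stddef.h>

/* Kahan compensated summation: the exact grouping of the additions is what
   recovers the low-order bits, so the compiler must not reassociate them.
   Illustrative sketch only. */
double kahan_sum(const double *x, size_t n) {
    double sum = 0.0, c = 0.0;          /* c accumulates the lost low-order bits */
    for (size_t i = 0; i < n; ++i) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;              /* algebraically zero, numerically not */
        sum = t;
    }
    return sum;
}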

You can probably also forget that this is C++ and just use it in the C program (if it compiles with a C++ compiler).

Hope this can be useful.

EDIT:

This is what I get from my compiler:

For a*a*a*a*a*a,

    movapd  %xmm1, %xmm0
    mulsd   %xmm1, %xmm0
    mulsd   %xmm1, %xmm0
    mulsd   %xmm1, %xmm0
    mulsd   %xmm1, %xmm0
    mulsd   %xmm1, %xmm0

For (a*a*a)*(a*a*a),

    movapd  %xmm1, %xmm0
    mulsd   %xmm1, %xmm0
    mulsd   %xmm1, %xmm0
    mulsd   %xmm0, %xmm0

For power<6>(a),

    mulsd   %xmm0, %xmm0
    movapd  %xmm0, %xmm1
    mulsd   %xmm0, %xmm1
    mulsd   %xmm0, %xmm1

@Marc Glisse 2013-01-31 19:11:49

Finding the optimal power tree might be hard, but since it is only interesting for small powers, the obvious answer is to precompute it once (Knuth provides a table up to 100) and use that hardcoded table (that's what gcc does internally for powi).

@gnasher729 2014-03-10 16:46:04

On modern processors, the speed is limited by latency. For example, the result of a multiplication might be available after five cycles. In that situation, finding the fastest way to create some power might be more tricky.

@gnasher729 2014-03-10 16:52:38

You could also try finding the power tree that gives the lowest upper bound for the relative rounding error, or the lowest average relative rounding error.

@gast128 2017-08-03 12:43:35

Boost also has support for this, e.g. boost::math::pow<6>(n); I think it even tries to reduce the number of multiplications by extracting common factors.

@Caduchon 2017-10-03 13:23:34

Nice idea! I already did that for precomputing factorials.

@minmaxavg 2018-01-05 16:01:26

Note that the last one is equivalent to (a**2)**3

@Lambdageek 2011-06-21 18:56:51

Because Floating Point Math is not Associative. The way you group the operands in floating point multiplication has an effect on the numerical accuracy of the answer.

As a result, most compilers are very conservative about reordering floating point calculations unless they can be sure that the answer will stay the same, or unless you tell them you don't care about numerical accuracy. For example: the -fassociative-math option of gcc which allows gcc to reassociate floating point operations, or even the -ffast-math option which allows even more aggressive tradeoffs of accuracy against speed.
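To make the non-associativity concrete, here is a small sketch (not part of the original answer) that scans random inputs for a value where the two groupings round differently; it must be compiled without -ffast-math:

#include <stdio.h>
#include <stdlib.h>

/* Look for a double where a*a*a*a*a*a and (a*a*a)*(a*a*a) give different
   results, demonstrating that floating-point multiplication is not associative.
   Illustrative sketch only. */
int main(void) {
    srand(12345);
    for (int i = 0; i < 1000000; ++i) {
        double a = 1.0 + (double)rand() / RAND_MAX;   /* some value in [1,2] */
        double chain   = a * a * a * a * a * a;
        double grouped = (a * a * a) * (a * a * a);
        if (chain != grouped) {
            printf("a = %.17g: chain = %.17g, grouped = %.17g\n", a, chain, grouped);
            return 0;
        }
    }
    printf("no difference found in this sample\n");
    return 0;
}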

@xis 2011-06-21 19:09:42

Yes. With -ffast-math it does this optimization. Good idea! But since our code is more concerned with accuracy than with speed, it might be better not to pass it.

@tc. 2011-06-22 02:19:45

IIRC C99 allows the compiler to do such "unsafe" FP optimizations, but GCC (on anything other than the x87) makes a reasonable attempt at following IEEE 754 - it's not "error bounds"; there is only one correct answer.

@Stephen Canon 2013-01-03 02:19:48

The implementation details of pow are neither here nor there; this answer doesn't even reference pow.

@user1899861 2013-08-03 01:45:18

@Lohoris, the basis of my argument was that any convergence routine, dependent on the same floating point hardware as a series of multiplies, could not possibly be more accurate. This turns out not to be true, because the convergence routines cheat, in a good way. They interpolate between precomputed table values that lie on powers of 2 boundaries. The error in interpolation is therefore very small. I will also tell you that for small powers, like 6, an integer power function does just as well. Still, if you were a compiler writer, I'm sure you'd put those optimizations in pow().

@nedR 2014-03-27 17:59:30

So my question is.. does that mean that the Intel Compiler is performing the optimization at the expense of accuracy and correctness? Or does it find another way to optimize while ensuring correctness?

@Stephen Canon 2014-03-27 18:19:26

@nedR: ICC defaults to allowing re-association. If you want to get standard-conforming behavior, you need to set -fp-model precise with ICC. clang and gcc default to strict conformance w.r.t. reassociation.

@Paul Draper 2014-08-24 16:11:21

@xis, it's not really that -fassociative-math would be inaccurate; it's just that a*a*a*a*a*a and (a*a*a)*(a*a*a) are different. It's not about accuracy; it's about standards conformance and strictly repeatable results, e.g. same results on any compiler. Floating point numbers are already not exact. It is seldom inappropriate to compile with -fassociative-math.

@user877329 2017-05-28 09:02:12

If you want accuracy, prefer (a*a*a)*(a*a*a). Possible reasons are more balanced sizes of the involved operands, since a*a*a*a*a >> a (>> meaning much greater than), and fewer operations, reducing the number of truncations.

@Mark Ransom 2011-06-21 18:52:49

I would not have expected this case to be optimized at all. It can't be very often that an expression contains subexpressions that can be regrouped to remove entire operations. I would expect compiler writers to invest their time in areas which would be more likely to result in noticeable improvements, rather than covering a rarely encountered edge case.

I was surprised to learn from the other answers that this expression could indeed be optimized with the proper compiler switches. Either the optimization is trivial, or it is an edge case of a much more common optimization, or the compiler writers were extremely thorough.

There's nothing wrong with providing hints to the compiler as you've done here. It's a normal and expected part of the micro-optimization process to rearrange statements and expressions to see what differences they will bring.

While the compiler may be justified in considering the two expressions to deliver inconsistent results (without the proper switches), there's no need for you to be bound by that restriction. The difference will be incredibly tiny - so much so that if the difference matters to you, you should not be using standard floating point arithmetic in the first place.

@Alice 2014-06-30 13:29:05

As noted by another commenter, this is untrue to the point of being absurd; the difference could be as much as half to 10% of the cost, and if run in a tight loop, that will translate to many instructions wasted to get what could be an insignificant amount of additional precision. Saying you shouldn't be using standard FP when you are doing a monte carlo is sort of like saying you should always use an airplane to get across country; it ignores many externalities. Finally, this is NOT an uncommon optimization; dead code analysis and code reduction/refactor is very common.

@Stephen Canon 2011-06-22 15:32:18

Lambdageek correctly points out that because associativity does not hold for floating-point numbers, the "optimization" of a*a*a*a*a*a to (a*a*a)*(a*a*a) may change the value. This is why it is disallowed by C99 (unless specifically allowed by the user, via compiler flag or pragma). Generally, the assumption is that the programmer wrote what she did for a reason, and the compiler should respect that. If you want (a*a*a)*(a*a*a), write that.

That can be a pain to write, though; why can't the compiler just do [what you consider to be] the right thing when you use pow(a,6)? Because it would be the wrong thing to do. On a platform with a good math library, pow(a,6) is significantly more accurate than either a*a*a*a*a*a or (a*a*a)*(a*a*a). Just to provide some data, I ran a small experiment on my Mac Pro, measuring the worst error in evaluating a^6 for all single-precision floating-point numbers in [1,2):

worst relative error using    powf(a, 6.f): 5.96e-08
worst relative error using (a*a*a)*(a*a*a): 2.94e-07
worst relative error using     a*a*a*a*a*a: 2.58e-07

Using pow instead of a multiplication tree reduces the error bound by a factor of 4. Compilers should not (and generally do not) make "optimizations" that increase error unless licensed to do so by the user (e.g. via -ffast-math).
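A rough sketch of how such a comparison can be reproduced is below. This is not the author's original harness; it uses the double-precision product as the reference (which, as noted in a comment below, is vastly more accurate than any of the single-precision variants), and the exact numbers depend on the quality of the libm being tested. Build without -ffast-math, e.g. gcc -O2 -std=c99 err.c -lm, on a target where float arithmetic is done in single precision (FLT_EVAL_METHOD == 0, as on x86-64):

#include <math.h>
#include <stdio.h>

/* Scan all single-precision values in [1,2) and record the worst relative
   error of each way of computing a^6, using the double product as reference.
   Illustrative sketch only. */
int main(void) {
    double worst_pow = 0.0, worst_grouped = 0.0, worst_chain = 0.0;
    for (float a = 1.0f; a < 2.0f; a = nextafterf(a, 2.0f)) {
        double ref = (double)a * a * a * a * a * a;   /* near-exact reference */
        double e_pow     = fabs((double)powf(a, 6.0f) - ref) / ref;
        double e_grouped = fabs((double)((a*a*a)*(a*a*a)) - ref) / ref;
        double e_chain   = fabs((double)(a*a*a*a*a*a) - ref) / ref;
        if (e_pow     > worst_pow)     worst_pow     = e_pow;
        if (e_grouped > worst_grouped) worst_grouped = e_grouped;
        if (e_chain   > worst_chain)   worst_chain   = e_chain;
    }
    printf("powf: %e  (a*a*a)*(a*a*a): %e  a*a*a*a*a*a: %e\n",
           worst_pow, worst_grouped, worst_chain);
    return 0;
}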

Note that GCC provides __builtin_powi(x,n) as an alternative to pow( ), which should generate an inline multiplication tree. Use that if you want to trade off accuracy for performance, but do not want to enable fast-math.
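A minimal sketch of using the builtin (GCC-specific; code generation depends on the GCC version and options):

/* __builtin_powi(x, n) raises a double to an integer power with an inline
   multiplication chain instead of a call to the libm pow(), without needing
   -ffast-math. Illustrative sketch only. */
double sixth_power(double a) {
    return __builtin_powi(a, 6);
}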

@TkTech 2011-06-22 17:04:17

Note also that Visual C++ provides an 'enhanced' version of pow(). By calling _set_SSE2_enable(<flag>) with flag=1, it will use SSE2 if possible. This reduces accuracy by a bit, but improves speeds (in some cases). MSDN: _set_SSE2_enable() and pow()

@Stephen Canon 2011-06-22 17:10:27

@xis19: With a good math library, the same will hold for double (and actually, for any supported floating-point type).

@Stephen Canon 2011-06-22 17:12:46

@TkTech: using SSE2 need not necessarily reduce accuracy, even. Most modern math libraries on x86 use SSE when it's available, and many of them deliver very accurate results.

@TkTech 2011-06-22 17:32:25

@Stephen: The MSDN documentation itself notes that there may be reduced accuracy when SSE2 instructions are used (as they are by default), due to the intermediate registers being 80-bit on the FPU and 64-bit when using SSE2.

@Stephen Canon 2011-06-22 17:37:23

@TkTech: Any reduced accuracy is due to Microsoft's implementation, not the size of the registers used. It's possible to deliver a correctly-rounded pow using only 32-bit registers, if the library writer is so motivated. There are SSE-based pow implementations that are more accurate than most x87-based implementations, and there are also implementations that trade off some accuracy for speed.

@TkTech 2011-06-22 17:45:06

@Stephen: Sure, you can make a more accurate implementation using nothing but 8-bit registers and a little array. However, I was and still am specifically talking about Visual C++'s implementation of pow(). "Could" and "possible" are not "IS" :)

@Stephen Canon 2011-06-22 17:56:20

@TkTech: Of course, I just wanted to make clear that the reduction in accuracy is due to the choices made by the library writers, not intrinsic to the use of SSE.

@j_random_hacker 2013-09-24 22:44:52

I'm interested to know what you used as the "gold standard" here for calculating relative errors -- I would normally have expected it would be a*a*a*a*a*a, but that is apparently not the case! :)

@Stephen Canon 2013-09-24 22:47:30

@j_random_hacker: since I was comparing single-precision results, double precision suffices for a gold standard — the error from a*a*a*a*a*a computed in double is vastly smaller than the error of any of the single-precision computations.

@David Hammen 2014-04-14 16:57:12

@StephenCanon - Three comments. (a) +1. This is a very nice answer. (b) An even better gold standard: std::pow((long double)a,6). (c) There is a third way: use double precision for intermediate calculations, for example calling Szabolcs's power template function via power<6,double>(a). Now you get half an ULP accuracy (as a float result) but with only a smallish performance penalty (1.4 times longer than a*a*a*a*a*a as a float). Compare with the huge performance penalty (32.4 times longer on my machine) that results from calling std::pow(float,float).

@Alice 2014-06-30 13:23:05

@DavidHammen Long Double would do nothing on MSVC's implementation, and more than one other; they type pun long double to just double. You'd need to make sure long double was well supported before saying it's a better gold standard.

@Rok Kralj 2015-08-11 17:18:31

Maybe throw (a*a)*(a*a)*(a*a) into the mix, too. Same number of multiplications, but probably more accurate.

@peterh 2016-04-26 10:17:04

"Generally, the assumption is that the programmer wrote what she did for a reason, and the compiler should respect that. If you want (aaa)*(aaa), write that." On this reasoning, everything could/should be forgotten since the first macro-capable assemblers...

@Rastaban 2013-10-01 19:33:31

There are already a few good answers to this question, but for the sake of completeness I wanted to point out that the applicable section of the C standard is 5.1.2.2.3/15 (which is the same as section 1.9/9 in the C++11 standard). This section states that operators can only be regrouped if they are really associative or commutative.

@Bjorn 2011-06-23 12:44:13

As Lambdageek pointed out, float multiplication is not associative and you can get less accuracy, but even when you would get better accuracy you can argue against the optimisation, because you may want a deterministic application. For example in a client/server game simulation, where every client has to simulate the same world, you want floating-point calculations to be deterministic.

@Alice 2014-06-30 13:29:30

Floating point is always deterministic.

@greggo 2014-09-04 21:48:00

@Alice - only when the compiler doesn't reorder things, possibly in different ways according to the compiler version, target machine, etc.

@Alice 2014-09-06 19:38:34

@greggo No, it's still deterministic then. No randomness is added in any sense of the word.

@greggo 2014-09-08 14:15:06

@Alice It seems fairly clear Bjorn here is using 'deterministic' in the sense of the code giving the same result on different platforms and different compiler versions etc (external variables which may be beyond the control of the programmer) -- as opposed to lack of actual numeric randomness at run time. If you are pointing out that this isn't a proper use of the word, I'm not going to argue with that.

@Alice 2014-09-09 18:44:40

@greggo Except even in your interpretation of what he says, it's still wrong; that's the entire point of IEEE 754, to provide identical characteristics for most (if not all) operations across platforms. Now, he made no mention of platforms or compiler versions, which would be a valid concern if you want every single operation on every remote server/client to be identical....but this isn't obvious from his statement. A better word might be "reliably similar" or something.

@Lanaru 2014-12-03 14:59:45

@Alice you're wasting everybody's time, including your own, by arguing semantics. His meaning was clear.

@Alice 2014-12-08 02:54:01

@Lanaru The entire point of standards IS semantics; his meaning was decidedly not clear.

@user811773 2011-06-23 10:07:41

Because a 32-bit floating-point number - such as 1.024 - is not 1.024. In a computer, 1.024 is an interval: from (1.024-e) to (1.024+e), where "e" represents an error. Some people fail to realize this and also believe that * in a*a stands for multiplication of arbitrary-precision numbers without there being any errors attached to those numbers. The reason why some people fail to realize this is perhaps the math computations they exercised in elementary schools: working only with ideal numbers without errors attached, and believing that it is OK to simply ignore "e" while performing multiplication. They do not see the "e" implicit in "float a=1.2", "a*a*a" and similar C codes.

Should the majority of programmers recognize (and be able to act on) the idea that the C expression a*a*a*a*a*a is not actually working with ideal numbers, the GCC compiler would then be FREE to optimize "a*a*a*a*a*a" into, say, "t=(a*a); t*t*t", which requires a smaller number of multiplications. But unfortunately, the GCC compiler does not know whether the programmer writing the code thinks of "a" as a number with or without an error. And so GCC will only do what the source code says - because that is all GCC sees with its "naked eye".

... once you know what kind of programmer you are, you can use the "-ffast-math" switch to tell GCC that "Hey, GCC, I know what I am doing!". This will allow GCC to convert a*a*a*a*a*a into a different piece of text - it looks different from a*a*a*a*a*a - but still computes a number within the error interval of a*a*a*a*a*a. This is OK, since you already know you are working with intervals, not ideal numbers.
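As a sketch, the "t=(a*a); t*t*t" rewrite mentioned above, written out by hand, looks like this (a conforming compiler would only produce such a form on its own under -ffast-math or a similar option):

/* The reassociated form described above: two multiplications for t = a*a,
   then t*t*t, for a total of 3 instead of 5. Illustrative sketch only. */
double pow6_reassociated(double a) {
    double t = a * a;   /* a^2 */
    return t * t * t;   /* (a^2)^3 = a^6 */
}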

@Donal Fellows 2011-06-24 13:35:46

Floating point numbers are exact. They're just not necessarily exactly what you expected. Moreover, the technique with epsilon is itself an approximation to how to tackle things in reality, because the true expected error is relative to the scale of the mantissa, i.e., you're normally up to about 1 LSB out, but that can increase with every operation performed if you're not careful so consult a numerical analyst before doing anything non-trivial with floating point. Use a proper library if you possibly can.

@supercat 2012-11-18 15:15:51

@DonalFellows: The IEEE standard requires that floating-point calculations yield the result that most accurately matches what the result would be if the source operands were exact values, but that does not mean they actually represent exact values. It is in many cases more helpful to regard 0.1f as being (1,677,722 +/- 0.5)/16,777,216, which should be displayed with the number of decimal digits implied by that uncertainty, than to regard it as exact quantity (1,677,722 +/- 0.5)/16,777,216 (which should be displayed to 24 decimal digits).

@Stephen Canon 2013-01-04 13:35:41

@supercat: IEEE-754 is pretty clear on the point that floating-point data do represent exact values; clauses 3.2 - 3.4 are the relevant sections. You can, of course, choose to interpret them otherwise, just as you can choose to interpret int x = 3 as meaning that x is 3+/-0.5.

@supercat 2013-01-04 16:15:07

@StephenCanon: I suppose it depends what you mean by "represent". In most applications, variables are used to model concrete things. In a physics simulation, for example, they may represent the X, Y, and Z components of various objects' positions and velocities, etc. If I say Distance = Math.Sqrt((x2-x1)*(x2-x1)+(y2-y1)*(y2-y1)+(z2-z1)*(z2-z1)), the purpose of Distance is to represent the Euclidean distance between (x1,y1,z1) and (x2,y2,z2). It's unlikely that the precise number stored in Distance will be the precise Euclidean distance between the two points, but...

@Stephen Canon 2013-01-04 16:22:01

@supercat: I agree entirely, but that doesn't mean that Distance isn't exactly equal to its numerical value; it means that numerical value is only an approximation to some physical quantity being modeled.

@supercat 2013-01-04 16:24:40

...nonetheless conventional usage would be to say that Distance represents that value, or perhaps Distance represents something that's for all practical purposes "close enough" to value, rather than explicitly stating that Distance represents the precise floating-point numerical value which would results from performing the aforementioned sequence of operations. From the point of view of the hardware performing the math primitives (multiplies, adds, sqrt, etc.) the quantities need to be evaluated exactly, but to the consumer they represent approximations.

@supercat 2013-01-04 16:27:40

My point is that if code performs someSingle = 1.0/10.0, the result will be as precise as the consumer is going to expect; if code performs someDouble = 1.0f/10.0f, the result is going to be off by many orders of magnitude more than a consumer which knew the float quantities happened to represent precise values would be apt to expect. If a Double is cast to Float and never cast back, the user will have no surprises with regard to accuracy. Conversions from Float to Double, however, are far more likely to yield "surprises".

@gnasher729 2014-03-10 16:50:18

For numerical analysis, your brain will thank you if you interpret floating point numbers not as intervals, but as exact values (which happen to be not exactly the values that you wanted). For example, if x is somewhere round 4.5 with an error less than 0.1, and you calculate (x + 1) - x, the "interval" interpretation leaves you with an interval from 0.8 to 1.2, while the "exact value" interpretation tells you the result will be 1 with an error of at most 2^(-50) in double precision.
