By Rz Mk


2012-01-09 21:39:03 8 Comments

Why does this code print False in .NET 4? It seems some unexpected behavior is being caused by the explicit cast.

I'd like an answer beyond "floating point is inaccurate" or "don't do that".

float a(float x, float y)
{
  return ( x * y );
}

float b(float x, float y)
{
  return (float)( x * y );
}

void Main()
{
  Console.WriteLine( a( 10f, 1f/10f ) == b( 10f, 1f/10f ) );
}

PS: This code came from a unit test, not release code. It was written this way deliberately. I suspected it would fail eventually, but I wanted to know exactly when and exactly why. The answer bears out this technique, because it provides an understanding of floating point determinism that goes beyond the usual one. And that was the point of writing the code this way: deliberate exploration.

PPS: The unit test was passing in .NET 3.5, but now fails after the upgrade to .NET 4.

2 Answers

@ony 2013-04-28 18:28:13

I don't have a Microsoft compiler at hand right now, and Mono doesn't show this effect. As far as I know, GCC 4.3+ uses GMP and MPFR to evaluate some expressions at compile time. The C# compiler may do something similar for non-virtual, static, or private methods within the same assembly. An explicit cast may interfere with such an optimization (though I see no reason why it couldn't produce the same behavior). That is, the compiler may inline and constant-fold the expression up to some point (for b(), for example, up to the cast).

GCC also has an optimization that promotes operations to a higher precision where that makes sense.

So I'd consider both optimizations as potential causes. But for either of them I see no reason why an explicit cast of the result should carry additional meaning such as "be closer to the standard".

@Eric Lippert 2012-01-09 21:48:18

David's comment is correct but insufficiently strong. There is no guarantee that doing that calculation twice in the same program will produce the same results.

The C# specification is extremely clear on this point:


Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
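
For example, here is a sketch of that x * y / z case (the specific values are mine, and whether the uncast version stays finite depends on how the jitter evaluates it):

double x = 1e308, y = 10.0, z = 10.0;
double strict = (double)( x * y ) / z;  // the cast forces the product down to double precision: Infinity
double loose  = x * y / z;              // may stay finite if the product lives in a wider register
Console.WriteLine( strict );            // Infinity
Console.WriteLine( loose );             // 1E+308 on some runtimes, Infinity on others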


The C# compiler, the jitter and the runtime all have broad latitude to give you more accurate results than are required by the specification, at any time, at a whim -- they are not required to choose to do so consistently and in fact they do not.

If you don't like that then do not use binary floating point numbers; either use decimals or arbitrary precision rationals.
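
To make that concrete, here is a minimal sketch that mirrors the question's code with decimal (the method names are just the question's, reused):

decimal a(decimal x, decimal y)
{
  return ( x * y );
}

decimal b(decimal x, decimal y)
{
  return (decimal)( x * y );
}

void Main()
{
  // Always True: decimal is evaluated at one fixed precision, so there is no
  // "extended precision" mode for the compiler or jitter to opt into.
  Console.WriteLine( a( 10m, 1m/10m ) == b( 10m, 1m/10m ) );
}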

"I don't understand why casting to float in a method that returns float makes the difference it does."

Excellent point.

Your sample program demonstrates how small changes can cause large effects. You note that in some version of the runtime, casting to float explicitly gives a different result than not doing so. When you explicitly cast to float, the C# compiler gives a hint to the runtime to say "take this thing out of extra high precision mode if you happen to be using this optimization". As the specification notes, this has a potential performance cost.
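
Here is a hedged sketch of the mechanism with the widening made explicit (the real jitted code keeps the intermediate in a register; this just models it with a double):

float q = 1f/10f;               // 0.100000001490116119384765625, the closest float to 0.1
double wide = 10.0 * q;         // 1.00000001490116119384765625 when held at double precision
float narrowed = (float)wide;   // the conv.r4-style narrowing rounds this back to exactly 1
Console.WriteLine( wide == 1.0 );     // False
Console.WriteLine( narrowed == 1f );  // True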

That doing so happens to round to the "right answer" is merely a happy accident; the right answer is obtained because in this case losing precision happened to lose it in the correct direction.

"How is .NET 4 different?"

You ask what the difference is between 3.5 and 4.0 runtimes; the difference is clearly that in 4.0, the jitter chooses to go to higher precision in your particular case, and the 3.5 jitter chooses not to. That does not mean that this situation was impossible in 3.5; it has been possible in every version of the runtime and every version of the C# compiler. You've just happened to run across a case where, on your machine, they differ in their details. But the jitter has always been allowed to make this optimization, and always has done so at its whim.

The C# compiler is also completely within its rights to choose to make similar optimizations when computing constant floats at compile time. Two seemingly-identical calculations in constants may have different results depending upon details of the compiler's runtime state.
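
As a hedged illustration of compile-time folding versus run-time evaluation (whether these two agree is up to the compiler and jitter of the day, which is exactly the point):

const float folded = 10f * ( 1f/10f );    // folded by the C# compiler at compile time
float tenth = 1f/10f;
float computed = 10f * tenth;             // computed by the jitted code at run time
Console.WriteLine( folded == computed );  // not guaranteed to be True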

More generally, your expectation that floating point numbers should have the algebraic properties of real numbers is completely out of line with reality; they do not have those algebraic properties. Floating point operations are not even associative; they certainly do not obey the laws of multiplicative inverses as you seem to expect them to. Floating point numbers are only an approximation of real arithmetic; an approximation that is close enough for, say, simulating a physical system, or computing summary statistics, or some such thing.
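
Associativity is an easy one to see break (a small sketch):

float big = 1e20f;
float negBig = -1e20f;
float one = 1f;
Console.WriteLine( ( big + negBig ) + one );  // 1: the large terms cancel first
Console.WriteLine( big + ( negBig + one ) );  // 0: the 1 is absorbed by -1e20 before the cancellation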

@Rz Mk 2012-01-09 22:45:37

Unfortunately I already knew floating point is not accurate. What I didn't know was everything else about the casting and optimization. It'd be easier to ask the right question if I knew the answer.

@Jeffrey Sax 2012-01-10 00:21:37

I don't like the term 'inaccurate' being applied to floating-point numbers. (float)1 is 100% accurate. Adding small integers and multiplication by a power of two with a normal result is also 100% accurate. The differences in this example are the result not of 'inaccurate' calculations, but (as Eric explained at length) of the freedom of the language and runtime to choose certain implementation details.
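
A quick sketch of that distinction (each comparison holds exactly, whatever precision the runtime picks, because every intermediate value is exactly representable):

float one = (float)1;        // exactly 1
float sum = 3f + 4f;         // small integers add exactly: 7
float scaled = 0.1f * 2f;    // multiplying by a power of two only shifts the exponent
Console.WriteLine( one == 1f );       // True
Console.WriteLine( sum == 7f );       // True
Console.WriteLine( scaled == 0.2f );  // True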

@Rz Mk 2012-01-10 03:26:44

@JeffreySax: Whatever you want to call it, it's a real thing, and it's the thing many people mistakenly or incompletely attributed this issue to. The point of the question was to get at the "other stuff". It took edits to both the question and the answer for that to happen. I wish nobody had mentioned the basics of floating point 'accuracy', because this issue (as Eric explained at length) is more complicated than what is commonly understood about floating point 'accuracy'. That's why comments solely about floating point 'accuracy' are still being upvoted.

@Jeffrey Sax 2012-01-10 03:52:35

@kk For some more background and examples, see also this blog post from (at the time) a member of the JIT team: blogs.msdn.com/b/davidnotario/archive/2005/08/08/449092.aspx

@CodesInChaos 2012-01-10 19:33:18

I would never have expected that casting a float to itself would have any effect. Is there a reason why you went with such a surprising feature instead of a built-in method such as Math.ForceToSingle?

@Eric Lippert 2012-01-10 20:36:57

@CodeInChaos: This feature greatly predates my time here. I found it surprising too. The use of a cast is consistent with the way the feature is implemented in the CLR, which is to emit a conv.r4 or conv.r8 instruction.
