2012-02-03 23:26:14, 8 Comments
I want to determine (in C++) whether one float is the multiplicative inverse of another float. The problem is that I have to use a third variable to do it. For instance, this code:

float x = 5, y = 0.2;
if (x == (1 / y)) cout << "They are the multiplicative inverse of each other" << endl;
else cout << "They are NOT the multiplicative inverse of each other" << endl;

will output "They are NOT...", which is wrong, while this code:

float x = 5, y = 0.2, z;
z = 1 / y;
if (x == z) cout << "They are the multiplicative inverse of each other" << endl;
else cout << "They are NOT the multiplicative inverse of each other" << endl;

will output "They are...", which is right.

Why is this happening?
@Alexey Frunze 2012-02-16 15:14:41
The discussions in other replies are great, so I won't repeat any of them, but there's no code. Here's a little bit of code to actually check if a pair of floats gives exactly 1.0 when multiplied.
The code makes a few assumptions/assertions (which are normally met on the x86 platform):
- float's are 32-bit binary (AKA single-precision) IEEE 754
- either int's or long's are 32-bit (I decided not to rely on the availability of uint32_t)
- memcpy() copies floats to ints/longs such that 8873283.0f becomes 0x4B076543 (i.e. a certain "endianness" is expected)
One extra assumption is this: it receives the actual floats that * would multiply (i.e. multiplication of floats wouldn't use higher-precision values that the math hardware/library can use internally).
Output (see at ideone.com):
@Gangnus 2012-02-16 16:28:56
+1. So, the binary fractions are precise there. Haven't you tried 2^(-100) * 2^(+100)?
@Alexey Frunze 2012-02-16 17:06:16
@Gangnus: Sure, if it's binary, powers of 2 are exact. See the updated code on ideone. We don't even need all the significant digits of 2^100 or 2^-100 in decimal.
@Gangnus 2012-02-16 20:11:04
I meant that above some power there will be problems fitting the power of 2 into the exponent part of the float.
@Alexey Frunze 2012-02-16 20:35:16
@Gangnus: Beyond the maximum exponent there's only infinity (the code returns 0 on Infs and NaNs). Below the minimum exponent there are denormalized values (the code handles them too). See another update on ideone demonstrating a denormalized case.
@Gangnus 2012-02-16 20:40:59
Yes, I see. Thank you. I was wondering what the operations would look like near that border, towards NaN or towards 0.
@Gangnus 2012-02-03 23:33:39
The Float Precision Problem
You have two problems here, but both come from the same root:
You can't compare floats precisely. You can't subtract or divide them precisely. You can't compute anything with them precisely. Any operation with them can (and almost always does) bring some error into the result. Even a = 0.2f is not a precise operation. The deeper reasons for that are very well explained by the authors of the other answers here. (My thanks and votes to them for that.)
Here comes your first and simpler error: you should never, never, never, never, NEVER use == on floats, or its equivalent in any language.
Instead of a == b, use Abs(a - b) < HighestPossibleError.
But this is not the sole problem in your task. Abs(1/y - x) < HighestPossibleError won't work either. At least, it won't work often enough. Why?
Let's take the pair x = 1000 and y = 0.001, and let's take the "starting" relative error of y to be 10^-6. (Relative error = error/value.)
Relative errors of values add up under multiplication and division. 1/y is about 1000, and its relative error is the same 10^-6 ("1" has no error). That makes the absolute error 1000 * 10^-6 = 0.001. When you subtract x later, that error will be all that remains. (Absolute errors add up under addition and subtraction, and the error of x is negligibly small.) Surely you are not counting on errors that large; HighestPossibleError would surely be set lower, and your program would throw away a good pair of x, y.
So, the next rules for float operations: try not to divide a greater value by a lesser one, and God save you from subtracting close values after that.
There are two simple ways to escape this problem:
1. Find which of x, y has the greater absolute value, divide 1 by the greater one, and only later subtract the lesser one.
2. If you want to compare 1/y against x, then, while you are still working with letters rather than values and your operations make no errors, multiply both sides of the comparison by y, and you have 1 against x*y. (Usually you should check signs in that operation, but here we use absolute values, so it is clean.) The resulting comparison has no division at all.
In a shorter way:
We already know that such a comparison as 1 against x*y should be done so: Abs(x*y - 1) < HighestPossibleError. That is all.
P.S. If you really need it all on one line, you can put the whole check directly into the if condition, but it is bad style. I wouldn't advise it.
P.P.S. In your second example the compiler optimizes the code so that it sets z to 5 before running any code, so checking 5 against 5 works even for floats.
@Yves Daoust 2012-02-14 11:49:07
What is striking is that, whatever the rounding rule is, you would expect the outcome of the two versions to be the same (either twice wrong or twice right)!
Most probably, in the first case a promotion to higher accuracy in the FPU registers takes place when evaluating x == 1/y, whereas z = 1/y really stores the single-precision result.
Other contributors have explained why 5 == 1/0.2 can fail, so I needn't repeat that.
@hammar 2012-02-03 23:52:13
The problem is that 0.2 cannot be represented exactly in binary, because its binary expansion has an infinite number of digits:

0.2 = 0.001100110011001100110011... (base 2)

This is similar to how 1/3 cannot be represented exactly in decimal. Since x is stored in a float, which has a finite number of bits, these digits get cut off at some point, for example:

0.2f = 0.00110011001100110011001101 (base 2)

The problem arises because CPUs often use a higher precision internally, so when you've just calculated 1/y, the result will have more digits, and when you load x to compare them, x will get extended to match the internal precision of the CPU. So when you do a direct bit-by-bit comparison, they are different.
In your second example, however, storing the result into a variable means it gets truncated before doing the comparison, so comparing them at this precision, they're equal.
Many compilers have switches you can enable to force intermediate values to be truncated at every step for consistency. However, the usual advice is to avoid direct comparisons between floating-point values and instead check whether they differ by less than some epsilon value, which is what Gangnus is suggesting.
@David Schwartz 2012-02-03 23:42:14
You will have to precisely define what it means for two approximations to be multiplicative inverses. Otherwise, you won't know what it is you're supposed to be testing.
0.2 has no exact binary representation. If you store numbers that have no exact representation with limited precision, you won't get answers that are exactly correct.
The same thing happens in decimal. For example, 1/3 has no exact decimal representation. You can store it as .333333. But then you have a problem: are 3 and .333333 multiplicative inverses? If you multiply them, you get .999999. If you want the answer to be "yes", you'll have to create a test for multiplicative inverses that isn't as simple as multiplying and testing for equality to 1. The same thing happens with binary.