By Tom

2009-03-06 11:31:23 8 Comments

What is the difference between decimal, float and double in .NET?

When would someone use one of these?


@user2389722 2013-06-07 12:50:10

| C#      | .NET Framework | Signed? | Bytes    | Possible Values                             |
| Type    | (System) type  |         | Occupied |                                             |
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                 |
| short   | System.Int16   | Yes     | 2        | -32768 to 32767                             |
| int     | System.Int32   | Yes     | 4        | -2147483648 to 2147483647                   |
| long    | System.Int64   | Yes     | 8        | -9223372036854775808 to 9223372036854775807 |
| byte    | System.Byte    | No      | 1        | 0 to 255                                    |
| ushort  | System.UInt16  | No      | 2        | 0 to 65535                                  |
| uint    | System.UInt32  | No      | 4        | 0 to 4294967295                             |
| ulong   | System.UInt64  | No      | 8        | 0 to 18446744073709551615                   |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 |
|         |                |         |          |  with 7 significant figures                 |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 |
|         |                |         |          |  with 15 or 16 significant figures          |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 |
|         |                |         |          |  with 28 or 29 significant figures          |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)              |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                               |

See here for more information.
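The floating-point sizes in the table can be checked directly, since C# permits `sizeof` on the built-in value types even in safe code; a minimal sketch (note that `decimal` occupies 16 bytes, i.e. 128 bits):

```csharp
using System;

class SizeCheck
{
    static void Main()
    {
        // sizeof(...) is a compile-time constant for the built-in value types
        Console.WriteLine(sizeof(float));   // 4
        Console.WriteLine(sizeof(double));  // 8
        Console.WriteLine(sizeof(decimal)); // 16 (128 bits)
    }
}
```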

@BrainSlugs83 2015-03-14 22:55:28

You left out the biggest difference, which is the base used for the decimal type (decimal is stored as base 10, all other numeric types listed are base 2).
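A short sketch of what that base difference means in practice; the classic 0.1 + 0.2 case behaves differently in the two bases:

```csharp
using System;

class BaseDifference
{
    static void Main()
    {
        // base-2 storage: 0.1 and 0.2 are only approximated
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        // base-10 storage: 0.1m and 0.2m are exact
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```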

@deegee 2015-06-22 19:18:07

The value ranges for Single and Double are not depicted correctly in the above image or the source forum post. Since we can't easily superscript the text here, use the caret character: Single should be 10^-45 and 10^38, and Double should be 10^-324 and 10^308. Also, MSDN has the float with a range of -3.4x10^38 to +3.4x10^38. Search MSDN for System.Single and System.Double in case of link changes.

@user1477332 2018-10-23 03:29:23

Decimal is 128 bits ... means it occupies 16 bytes not 12

@Purnima Bhatia 2020-01-17 11:26:59

To define Decimal, Float and Double values in .NET (C#),

you must write the values with a literal suffix:

Decimal dec = 12M/6;
Double dbl = 11D/6;
float fl = 15F/6;

and check the results.

And the bytes occupied by each are:

Float   - 4
Double  - 8
Decimal - 16

@Reza Jenabi 2019-12-14 12:16:36

  • Decimal 128 bit (28-29 significant digits): for financial applications it is better to use decimal types because they give you a high level of accuracy and make it easy to avoid rounding errors. Use decimal for non-integer math where precision is needed (e.g. money and currency).

  • Double 64 bit (15-16 digits): probably the most commonly used data type for real values, except when handling money. Use double for non-integer math where the most precise answer isn't necessary.

  • Float 32 bit (7 digits): used mostly in graphics libraries because of their very high demands for processing power, and also in situations that can tolerate rounding errors.

Decimals are much slower than a double/float.

Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can.

Decimals also allow the encoding of trailing zeros.
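Both of those points can be sketched in a few lines; the commented-out comparison does not compile because C# defines no implicit conversion between decimal and double:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        decimal m = 1.0m;
        double  d = 1.0;
        float   f = 1.0f;

        // Console.WriteLine(m == d);        // error CS0019: no implicit conversion
        Console.WriteLine((double)m == d);   // True, after an explicit cast
        Console.WriteLine(f == d);           // True, float widens to double implicitly

        // decimal keeps the scale, so trailing zeros survive arithmetic
        Console.WriteLine(1.20m + 1.30m);    // 2.50
        Console.WriteLine(1.2 + 1.3);        // 2.5
    }
}
```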

@Mukesh Kumar 2014-01-02 05:01:41

  • float: ±1.5 x 10^-45 to ±3.4 x 10^38 (~7 significant figures)
  • double: ±5.0 x 10^-324 to ±1.7 x 10^308 (15-16 significant figures)
  • decimal: ±1.0 x 10^-28 to ±7.9 x 10^28 (28-29 significant figures)

@BrainSlugs83 2017-09-16 01:19:22

The difference is more than just precision. -- decimal is actually stored in decimal format (as opposed to base 2; so it won't lose or round digits due to conversion between the two numeric systems); additionally, decimal has no concept of special values such as NaN, -0, ∞, or -∞.
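A sketch of that special-value difference: double carries the IEEE 754 special values, while the same operations on decimal simply throw:

```csharp
using System;

class SpecialValues
{
    static void Main()
    {
        double zero = 0.0;
        Console.WriteLine(1.0 / zero);                // positive infinity, no exception
        Console.WriteLine(double.IsNaN(zero / zero)); // True

        decimal mZero = 0m;
        try
        {
            decimal bad = 1m / mZero;                 // decimal has no NaN or ∞ to return
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("decimal throws instead");
        }
    }
}
```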

@cgreeno 2009-03-06 11:33:02

Precision is the main difference.

Float - 7 digits (32 bit)

Double - 15-16 digits (64 bit)

Decimal -28-29 significant digits (128 bit)

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20X times in some tests) than a double/float.

Decimals and floats/doubles cannot be compared without a cast whereas floats and doubles can. Decimals also allow the encoding of trailing zeros.

float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);

Result :

float: 0.3333333  
double: 0.333333333333333  
decimal: 0.3333333333333333333333333333

@Hammad Khan 2011-11-21 18:23:01

This answer needs to be corrected. Precision for Decimal is not 128 bits but infinite because the format is essentially different from float. @Skeet answer is the best. Example: 0.1 = 0.099999.... in float but in decimal it is 0.1, that is infinite precision. If you were to use 128 bits precision like in floats, you would get 0.999999.... (up to 29 digits) but that is still not as precise as decimal 0.1

@Erik P. 2011-11-29 21:14:43

@Thecrocodilehunter: sorry, but no. Decimal can represent all numbers that can be represented in decimal notation, but not 1/3 for example. 1.0m / 3.0m will evaluate to 0.33333333... with a large but finite number of 3s at the end. Multiplying it by 3 will not return an exact 1.0.
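This is easy to check; a quick sketch of the 1/3 case:

```csharp
using System;

class OneThird
{
    static void Main()
    {
        decimal third = 1.0m / 3.0m;
        Console.WriteLine(third);              // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m);         // 0.9999999999999999999999999999
        Console.WriteLine(third * 3m == 1.0m); // False
    }
}
```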

@Hammad Khan 2011-11-29 22:13:51

This is a fault with the number itself (0.3333... in this case), not its decimal representation, where it is reproduced 100% faithfully. Once you introduce an error into the number, nobody can remove it (not even decimal numbers). The only way to remove the error from this number is to use 1/3, not 0.333. Some calculators might keep 1/3 as an intermediate value but most of them don't. Try this: represent 0.3333 in floating point and you will end up with 0.3332999998..., which is not 0.3333 (you see the error). Now represent it in decimal: it is 0.3333 (exactly as it is, no error - 100% accurate).

@Igby Largeman 2012-01-06 17:42:04

@Thecrocodilehunter: I think you're confusing accuracy and precision. They are different things in this context. Precision is the number of digits available to represent a number. The more precision, the less you need to round. No data type has infinite precision.

@Hammad Khan 2012-01-09 19:50:27

@IgbyLargeman Precision and accuracy are used in the context of measuring a value with an instrument. In this case we are not talking about any instrument. We are only talking about representing a value faithfully in decimal vs floating point. Precision does not apply here, as we are not talking about consistency of measuring the same value over and over. But accuracy does. The accuracy of a decimal point number that is in its range is 100%, that is infinite accuracy.

@Daniel Pryden 2012-01-10 01:49:31

@Thecrocodilehunter: You're assuming that the value that is being measured is exactly 0.1 -- that is rarely the case in the real world! Any finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, float will conflate 0.1 and 0.1 + 1e-8, while decimal will conflate 0.1 and 0.1 + 1e-29. Sure, within a given range, certain values can be represented in any format with zero loss of accuracy (e.g. float can store any integer up to 1.6e7 with zero loss of accuracy) -- but that's still not infinite accuracy.

@Hammad Khan 2012-01-10 12:14:44

@DanielPryden, OK, I believe decimal will represent 0.1 as 0.1, not as 0.1 + 1e-29. This is because the format is essentially different from float. That is why it is very slow but accurate. If you were right then decimal would be useless. Remember the main problem with float: if (0.1 == 0.1) does not hold true when we think it should be true. In decimal it will ALWAYS be true because 0.1 will be 0.1 and nothing else. For example it will not be 0.99999999999999999999999999999.

@Daniel Pryden 2012-01-10 18:27:53

@Thecrocodilehunter: You missed my point. 0.1 is not a special value! The only thing that makes 0.1 "better" than 0.10000001 is because human beings like base 10. And even with a float value, if you initialize two values with 0.1 the same way, they will both be the same value. It's just that that value won't be exactly 0.1 -- it will be the closest value to 0.1 that can be exactly represented as a float. Sure, with binary floats, (1.0 / 10) * 10 != 1.0, but with decimal floats, (1.0 / 3) * 3 != 1.0 either. Neither is perfectly precise.

@Hammad Khan 2012-01-10 19:19:45

@DanielPryden, with a decimal number, it will be exactly 0.1. Of course it is not about 0.1 only. A large number of decimal numbers have this problem. The fact is that in decimal, (0.1 == 0.1) will always be true. In float it may or may not be true because the actual binary value may not be exactly 0.1.

@Daniel Pryden 2012-01-10 19:29:12

@Thecrocodilehunter: You still don't understand. I don't know how to say this any more plainly: In C, if you do double a = 0.1; double b = 0.1; then a == b will be true. It's just that a and b will both not exactly equal 0.1. In C#, if you do decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m; then a == b will also be true. But in that case, neither of a nor b will exactly equal 1/3 -- they will both equal 0.3333.... In both cases, some accuracy is lost due to representation. You stubbornly say that decimal has "infinite" precision, which is false.

@Daniel Pryden 2012-01-10 19:33:58

@Thecrocodilehunter: Just in case you don't believe me, here's some sample code that shows that 0.1 == 0.1.
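The linked sample is gone, but the point is easy to reconstruct; a sketch along the same lines:

```csharp
using System;

class SameApproximation
{
    static void Main()
    {
        double a = 0.1;
        double b = 0.1;
        Console.WriteLine(a == b);   // True: both hold the same nearest-double to 0.1

        decimal c = 1.0m / 3.0m;
        decimal d = 1.0m / 3.0m;
        Console.WriteLine(c == d);   // True: both hold the same rounded decimal value
    }
}
```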

@Chibueze Opata 2012-07-15 14:36:46

This should have been marked as the correct answer. Jon Skeet's answer's a bit confusing...

@Brian 2013-01-18 18:32:30

@ChibuezeOpata: Skeet's answer discusses a completely separate difference which this answer completely ignores. Personally, I consider Skeet's answer to be more valuable, as his answer is more relevant in deciding which data type to use.

@Chibueze Opata 2013-01-18 19:46:41

@Brian They are both very valuable, and that is why I said ab initio that they are incomplete without each other. Concerning the question asked however, this answer simply goes straight to the point and tells you the essential differences. You can make almost all the deductions in Jon Skeet's answer from this one. :)

@svick 2013-06-21 16:11:36

@ChibuezeOpata No, you can't, because this answer doesn't even mention the decimal/binary distinction.

@Erik Funkenbusch 2013-06-21 18:40:18

@DanielPryden - I know this is an old issue, but maybe I can help clarify. The issue here is that decimal numbers are 100% accurate when representing numbers that are within the precision of the decimal format. That is, not the result of pi, or 1/3, or 2/3. That's irrelevant because those numbers require greater precision than decimal can represent. If you do a calculation on a decimal value that exceeds the precision, then all bets are off. With float/double, numbers that ARE within the precision of the format are not always 100% accurate. .1 for example.

@Daniel Pryden 2013-06-21 23:52:06

@MystereMan: what do you mean by "within the precision of the decimal format"? If the number you are measuring is exactly an integer raised to a power of ten, then absolutely use a decimal. Many numbers encountered in everyday life have this property (because they are discrete, not continuous, measurements), but many others do not. The correct data type for any purpose always depends on the purpose. Please don't mistake anything I'm saying here as implying that anyone should always use floats -- I'm merely arguing that one shouldn't blindly always use decimals instead.

@Daniel Pryden 2013-06-21 23:59:35

@MystereMan: I think part of your confusion is betrayed by the phrase "within the precision of the format". I don't think that makes sense -- do you mean something like "within the representable range" instead? But even that doesn't prove anything: 0.1 is not any more "within the precision" of a double than 2^53+1 is, and both can be represented equally faithfully.

@Dan Nissenbaum 2013-06-26 00:16:34

Pretty much every time the issue of precision of floating point representation (be it decimal or binary) comes up, there ensues a long conversation of comments at cross-purposes. Fundamentally this is due to the question of whether the exact value represented by the floating point representation corresponds to the same exact value in the real world. This cannot be known by looking at the representation of the number itself; it can only be known by the humans that use the representation.

@David Mårtensson 2013-06-27 14:27:53

Here is a small C# code example (which is what this question is about) that visualizes the problem (using decimal and float): (0.1f == 1f/10) and (0.1m == 1m/10). The first will evaluate to false while the second will evaluate to true, even though both should evaluate to true. This is due to the fact that float cannot exactly store the value 0.1.

@supercat 2013-09-12 14:58:02

@DavidMårtensson: Why should 0.1f not equal 1f/10? Should not both evaluate to 13421773/134217728?

@David Mårtensson 2013-09-13 16:09:03

@supercat Because of how float works internally, 0.1f cannot be exactly represented in the internal binary format, and due to the fact that calculations use more precision internally, 1f/10 will not land on the same rounded value as 0.1f, hence they will not be equal.

@supercat 2013-09-13 16:29:42

@DavidMårtensson: The compile-time type of the expression 1f/10 is float. Are you saying that compilers are not required to round the result of the division to the nearest float before performing the comparison? I regard as somewhat broken the fact that one is allowed to directly compare a float to anything else, or a double to anything else other than 32-bit-or-smaller integers [I think a cast should be required] but I would consider severely broken a compiler that performed what was by the rules of the language a float/float comparison as though it were a float/double comparison.

@supercat 2013-09-13 16:36:52

@DavidMårtensson: (Incidentally, what I'd like to see would be a language with both "loose" and "strict" 32- and 64-bit floating-point types, where the strict ones would not accept any implicit conversions and the "loose" one would be defined as extending operation results to double and would generally allow implicit down-conversions to float, but would disallow direct comparisons between 32-bit and 64-bit values. I would posit that while C# will have no qualm about double d1=f1*f2; it would be rare for the programmer to actually intend that d1 might hold a float-precision result.)

@David Mårtensson 2013-09-16 07:07:45

The IEEE standard for binary floating point does not mandate strict decimal precision; see Mark Jones' answer below. It is not defined by the language. If you require strict rounding you should use the decimal datatype, which is a decimal floating point, as Jon Skeet points out in Mehrdad's answer below. The different types have different uses and different requirements. When calculating real-world values in physics, for example, your original numbers are probably less precise than your compiler, so the computational errors will usually have less impact than measurement errors.

@Matt 2014-02-28 18:52:15

Yet another attempt to try to hit the nail on the head: Both float and double can exactly represent fractions of the form p/q where q is a power of 2. E.g. 0.5, 3.25, 1/256, etc. decimal however can exactly represent fractions of the form p/q where q is a power of 10 (ten). See this answer. Though it is correct that decimal has more significant digits, it is misleading to leave it at that; the representation is fundamentally different than float and double which lends decimal to precise decimal calculations.
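A sketch of that rule: fractions whose denominator divides a power of 2 are exact in float/double, and those whose denominator divides a power of 10 are exact in decimal:

```csharp
using System;

class ExactFractions
{
    static void Main()
    {
        // denominators that are powers of 2: exact in double
        Console.WriteLine(13.0 / 4.0 == 3.25);         // True
        Console.WriteLine(1.0 / 256.0 == 0.00390625);  // True

        // denominator 20 divides 100 (a power of 10): exact in decimal
        Console.WriteLine(1m / 20m == 0.05m);          // True
    }
}
```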

@Randall Sutton 2014-06-25 13:10:31

Precision is not the main difference. Decimal being base 10 is the main difference.

@BrainSlugs83 2015-03-14 22:44:45

-1 while the main difference between float and double is precision, the main difference between float, double, and decimal is not. It's true that decimal does have a wider precision, but more importantly, it also stores the values in a decimal-centric format, as opposed to float and double, which store their values in binary-centric format. To give an example, the number ".75" in decimal is equivalent to ".11" in binary, because one half plus one fourth == three fourths. Naturally, some fractional decimal values (even within the ~7 digit range) can only be approximated by double and float.

@phoog 2015-05-27 10:27:48

@Matt decimal can exactly represent fractions of the form p/q when q is a power of 2 or a power of 5 (i.e., prime factors of 10). Consider 1/2 (0.5) and 1/5 (0.2), for example; neither denominator is a power of 10.

@phoog 2015-05-27 10:45:18

@hmd consider a floating-point base-3 system, where 1/10 (written 1/101 in base 3) is an infinitely repeating fraction: 0.00220022.... However, 1/3 is not; it is 0.1. Consider Matt's comment: fractions that can be exactly represented in a given base are those that use the prime factors of the base. Decimal does not have infinite precision; it has 28 decimal digits of precision. If it truly had infinite precision, you would be able to represent half of 0.0000000037252902984619140625m. But you can't; dividing that by 2 gives 0.0000000018626451492309570312m instead of 0.00000000186264514923095703125.

@Matt 2015-05-27 19:34:22

@phoog There is no requirement that p/q be in simplest form. In your examples, 1/2=5/10 and 1/5=2/10 and therefore have exact decimal representations. Another example is 1/20=0.05 in which the denominator is neither a power of 2, 5 or 10. You said "decimal can exactly represent fractions of the form p/q when q is a power of 2 or a power of 5". Though technically correct, this is actually more restrictive than what I said because 1/10, for example, cannot be written in the form p/q where q is a power of 2 or 5.

@phoog 2015-05-27 21:30:35

@hmd In addition to the values, like 0.1, that can be represented as decimal but not as double, there are some values that can be exactly represented as double but not decimal. Consider the fraction 1 / 2^31. The decimal representation is truncated, while the double representation is exact. The .NET string representation of the double is not exact, but the in-memory bit representation is exact. Jon Skeet has a class that will convert any double to the exact decimal string representation, which can be quite long:

@phoog 2015-05-27 21:34:07

@Matt I also oversimplified. The real requirement is that after reducing the fraction to its simplest form, q is the product of a power of 2 and a power of 5; that is, q's unique prime factors must be the same as or a subset of the unique prime factors of the base. You can of course equivalently recast that as all of q's prime factors being either a prime factor of the base or a divisor of p.

@Quality Catalyst 2017-12-16 23:37:01

Links in your answer are dead.

@JHenry 2020-02-22 18:21:50

Better answer!! =))

@daniel 2012-05-22 12:05:17

Integers, as was mentioned, are whole numbers. They can't store the point something, like .7, .42, and .007. If you need to store numbers that are not whole numbers, you need a different type of variable. You can use the double type or the float type. You set these types of variables up in exactly the same way: instead of using the word int, you type double or float. Like this:

float myFloat;
double myDouble;

(float is short for "floating point", and just means a number with a point something on the end.)

The difference between the two is in the size of the numbers that they can hold. For float, you can have up to 7 digits in your number. For double, you can have up to 15-16 digits. To be more precise, here are the official ranges:

float:  1.5 × 10^-45  to 3.4 × 10^38  
double: 5.0 × 10^-324 to 1.7 × 10^308

float is a 32-bit number, and double is a 64-bit number.

Try assigning a value and printing it:

double myDouble;
myDouble = 0.007;

Now change the value to something longer:

myDouble = 12345678.1234567;

Run the code and the number is displayed correctly. Add another digit on the end, though, and C# will again round up or down. The moral is if you want accuracy, be careful of rounding!
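The digit limits described above can be sketched directly; a value needing more than ~7 significant digits is rounded by float but held exactly by double:

```csharp
using System;

class DigitLimits
{
    static void Main()
    {
        float f = 123456789F;                // needs 9 digits, float keeps ~7
        Console.WriteLine(f == 123456792F);  // True: rounded to the nearest float

        double d = 123456789.0;              // well within double's 15-16 digits
        Console.WriteLine(d == 123456789.0); // True: stored exactly
    }
}
```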

@BrainSlugs83 2017-09-16 01:09:52

The "point something" you mentioned is generally referred to as "the fractional part" of a number. "Floating point" does not mean "a number with a point something on the end"; but instead "Floating Point" distinguishes the type of number, as opposed to a "Fixed Point" number (which can also store a fractional value); the difference is whether the precision is fixed, or floating. -- Floating point numbers give you a much bigger dynamic range of values (Min and Max), at the cost of precision, whereas a fixed point numbers give you a constant amount of precision at the cost of range.

@Mike Gledhill 2012-04-16 09:23:44

This has been an interesting thread for me, as today, we've just had a nasty little bug, concerning decimal having less precision than a float.

In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service to save into a SQL Server database.

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    decimal value = 0;
    Decimal.TryParse(cellValue.ToString(), out value);
}

Now, for almost all of our Excel values, this worked beautifully. But for some, very small Excel values, using decimal.TryParse lost the value completely. One such example is

  • cellValue = 0.00006317592

  • Decimal.TryParse(cellValue.ToString(), out value); // would return 0

The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal:

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    double valueDouble = 0;
    double.TryParse(cellValue.ToString(), out valueDouble);
    decimal value = (decimal) valueDouble;
}

Even though double has less precision than a decimal, this actually ensured small numbers would still be recognised. For some reason, double.TryParse was actually able to retrieve such small numbers, whereas decimal.TryParse would set them to zero.

Odd. Very odd.

@micahtan 2012-08-27 23:57:49

Out of curiosity, what was the raw value of cellValue.ToString()? Decimal.TryParse("0.00006317592", out val) seems to work...

@weston 2013-05-22 14:19:05

-1 Don't get me wrong, if true, it's very interesting but this is a separate question, it's certainly not an answer to this question.

@SergioL 2014-10-15 20:44:52

Maybe because the Excel cell was returning a double and ToString() value was "6.31759E-05" therefore the decimal.Parse() didn't like the notation. I bet if you checked the return value of Decimal.TryParse() it would have been false.

@Robino 2015-05-20 15:52:39

@weston Answers often complement other answers by filling in nuances they have missed. This answer highlights a difference in terms of parsing. It is very much an answer to the question!

@BrainSlugs83 2017-09-16 01:15:16

Er... decimal.Parse("0.00006317592") works -- you've got something else going on. -- Possibly scientific notation?

@Robert McKee 2019-10-25 14:42:51

decimal.Parse("0.00006317592") works, but decimal.Parse(0.00006317592.ToString()) does not as @SergioL suggested. 0.00006317592.ToString() becomes 6.317592E-05 and decimal.Parse does not like that.
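A sketch of the fix: passing NumberStyles.Float (which includes AllowExponent) lets decimal.Parse accept the scientific notation that double.ToString() can produce:

```csharp
using System;
using System.Globalization;

class ParseExponent
{
    static void Main()
    {
        string s = "6.317592E-05"; // what 0.00006317592.ToString() can produce

        // the default NumberStyles.Number rejects the exponent:
        Console.WriteLine(decimal.TryParse(s, NumberStyles.Number,
                          CultureInfo.InvariantCulture, out _)); // False

        decimal value = decimal.Parse(s, NumberStyles.Float,
                                      CultureInfo.InvariantCulture);
        Console.WriteLine(value == 0.00006317592m);              // True
    }
}
```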

@GntS 2018-02-10 08:47:40

In simple words:

  1. The Decimal, Double, and Float variable types are different in the way that they store the values.
  2. Precision is the main difference (note that this is not the only difference): float is a single-precision (32 bit) floating point data type, double is a double-precision (64 bit) floating point data type, and decimal is a 128-bit floating point data type.
  3. The summary table:

    Type       Bits    Have up to                   Approximate Range 
    float      32      7 digits                     ±1.5 × 10 ^ (-45)  to ±3.4 × 10 ^ (38)
    double     64      15-16 digits                 ±5.0 × 10 ^ (-324) to ±1.7 × 10 ^ (308)
    decimal    128     28-29 significant digits     ±1.0 × 10 ^ (-28)  to ±7.9 × 10 ^ (28)

You can read more here: Float, Double, and Decimal.

@Mark Dickinson 2018-02-10 12:15:36

What does this answer add that isn't already covered in the existing answers? BTW, your "or" in the "decimal" line is incorrect: the slash in the web page that you're copying from indicates division rather than an alternative.

@Mark Dickinson 2018-02-10 12:28:21

And I'd dispute strongly that precision is the main difference. The main difference is the base: decimal floating-point versus binary floating-point. That difference is what makes Decimal suitable for financial applications, and it's the main criterion to use when deciding between Decimal and Double. It's rare that Double precision isn't enough for scientific applications, for example (and Decimal is often unsuitable for scientific applications because of its limited range).

@Jon Skeet 2009-03-06 11:56:02

float and double are floating binary point types. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal is a floating decimal point type. In other words, they represent a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
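The "number plus the location of the point" encoding described above can be inspected directly for decimal via decimal.GetBits, which exposes the integer significand and the base-10 scale:

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        int[] bits = decimal.GetBits(123.45m);
        int scale = (bits[3] >> 16) & 0xFF;  // the scale lives in bits 16-23 of the last int
        Console.WriteLine($"{bits[0]} x 10^-{scale}"); // 12345 x 10^-2
    }
}
```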

@Mingwei Samuel 2014-08-13 21:50:33

float/double usually do not represent numbers as 101.101110, normally it is represented as something like 1101010 * 2^(01010010) - an exponent

@Jon Skeet 2014-08-13 21:57:58

@Hazzard: That's what the "and the location of the binary point" part of the answer means.

@Brett Caswell 2015-02-03 15:48:58

I'm surprised it hasn't been said already: float is a C# alias keyword and isn't a .NET type. It's System.Single. Single and Double are floating binary point types.

@BKSpurgeon 2015-11-26 03:00:05

wait....isn't a decimal represented in 1s and 0s eventually? I thought computers could only work in binary form. so then a decimal is eventually a binary type isn't it?

@Jon Skeet 2015-11-26 07:20:16

@BKSpurgeon: Well, only in the same way that you can say that everything is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it's a number represented as an integer significand and a scale, such that the result is significand * 10^scale, whereas float and double are significand * 2^scale. You take a number written in decimal, and move the decimal point far enough to the right that you've got an integer to work out the significand and the scale. For float/double you'd start with a number written in binary.

@Ehsan 2016-02-26 02:52:45

The other aspect is conversion between these data types: Single and Double data types use "fuzzy" comparison; conversion from double to single loses precision; conversion from single to double creates inaccuracy; conversion to/from decimal introduces rounding errors between bases. Create team consistency (a style guide) on the data type you're using, and watch for conversions.
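A sketch of the conversion pitfalls mentioned in that comment; narrowing double to float discards precision that widening back does not restore:

```csharp
using System;

class ConversionLoss
{
    static void Main()
    {
        double d = 0.1;
        float  f = (float)d;               // narrowing: rounds to the nearest float
        Console.WriteLine((double)f == d); // False: the lost bits don't come back

        decimal m = (decimal)d;            // double -> decimal rounds to ~15 digits
        Console.WriteLine(m == 0.1m);      // True: happens to land exactly on 0.1m
    }
}
```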

@David 2016-08-29 15:08:54

Another difference: float 32-bit; double 64-bit; and decimal 128-bit.

@phuzi 2019-06-20 08:27:37

@BrettCaswell double is an alias to System.Double, decimal is a alias to System.Double and string is an alias to System.String.

@Andrzej Gis 2019-06-21 20:05:26

@JonSkeet For floats/doubles we get: Console.WriteLine(0.1 + 0.2 == 0.3); // false. If I get it right, it's not equal because of the conversion from decimal notation we use in code to the binary notation used in memory. Can we do it the other way though? Initialize decimal variables with binary notation in code and then get a similar mismatch?

@Jon Skeet 2019-06-22 06:16:33

@AndrzejGis: No, because every binary value is exactly representable in decimal. (Basically because 2 is a factor of 10.)

@Prabu 2019-09-18 15:28:12

@JonSkeet What are some examples of more artefacts of nature? Would you consider the speed (mph) or consumption (litres/minute) or latitude/longitude good candidates for a double?

@Jon Skeet 2019-09-18 15:34:52

@Prabu: Yes, those feel pretty natural to me.

@zdimension 2019-12-22 13:17:12

@phuzi isn't decimal an alias to System.Decimal instead of System.Double?

@phuzi 2019-12-31 00:09:44

@zdimension Ah, nuts. Yes it is. Taken a while for someone to notice the typo though.

@schlebe 2017-02-23 13:05:31

The problem with all these types is that a certain imprecision subsists, AND this problem can occur with small decimal numbers, as in the following example:

Dim fMean as Double = 1.18
Dim fDelta as Double = 0.08
Dim fLimit as Double = 1.1

If fMean - fDelta < fLimit Then
    bLower = True
Else
    bLower = False
End If

Question: Which value does bLower variable contain ?

Answer: On a 32 bit machine bLower contains TRUE !!!

If I replace Double by Decimal, bLower contains FALSE which is the good answer.

In double, the problem is that fMean - fDelta = 1.09999999999..., which is lower than 1.1.
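The same comparison can be written in C# (the question's language); a sketch, with decimal giving the intended result:

```csharp
using System;

class RoundingCompare
{
    static void Main()
    {
        double dMean = 1.18, dDelta = 0.08, dLimit = 1.1;
        Console.WriteLine(dMean - dDelta < dLimit);  // True: the difference falls just below 1.1

        decimal mMean = 1.18m, mDelta = 0.08m, mLimit = 1.1m;
        Console.WriteLine(mMean - mDelta < mLimit);  // False: 1.10m is not below 1.1m
    }
}
```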

Caution: I think the same problem can certainly exist for other numbers, because Decimal is only a double with higher precision, and precision always has a limit.

In fact, Double, Float and Decimal all correspond to the BINARY (COMP) type in COBOL!

It is regrettable that the other numeric types implemented in COBOL don't exist in .NET. For those who don't know COBOL, the following numeric types exist in COBOL:

BINARY or COMP           (like float or double or decimal)
PACKED-DECIMAL or COMP-3 (2 digits in 1 byte)
ZONED-DECIMAL            (1 digit in 1 byte)

@tomosius 2016-04-22 15:18:17

I won't reiterate tons of good (and some bad) information already answered in other answers and comments, but I will answer your followup question with a tip:

When would someone use one of these?

Use decimal for counted values

Use float/double for measured values

Some examples:

  • money (do we count money or measure money?)

  • distance (do we count distance or measure distance? *)

  • scores (do we count scores or measure scores?)

We always count money and should never measure it. We usually measure distance. We often count scores.

* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).
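To see why counted values want decimal, a minimal sketch: binary floating point cannot count tenths exactly, while decimal can:

```csharp
using System;

class CountVsMeasure
{
    static void Main()
    {
        // Binary floating point cannot represent 0.1, 0.2 or 0.3 exactly...
        Console.WriteLine(0.1 + 0.2 == 0.3);     // False

        // ...but decimal counts tenths exactly, like pennies in a till.
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True
    }
}
```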

@John Henckel 2019-04-04 18:55:33

I really like this answer, especially the question "do we count or measure money?" However, other than money, I can't think of anything that is "counted" that is not simply integer. I have seen some applications that use decimal simply because double has too few significant digits. In other words, decimal might be used because C# does not have a quadruple type.

@yoyo 2014-05-16 16:21:29

For applications such as games and embedded systems where memory and performance are both critical, float is usually the numeric type of choice as it is faster and half the size of a double. Integers used to be the weapon of choice, but floating point performance has overtaken integer in modern processors. Decimal is right out!

@BrainSlugs83 2017-09-16 01:22:47

Pretty much all modern systems, even cell phones, have hardware support for double; and if your game has even simple physics, you will notice a big difference between double and float. (For example, calculating the velocity / friction in a simple Asteroids clone, doubles allow acceleration to flow much more fluidly than float. -- Seems like it shouldn't matter, but it totally does.)

@yoyo 2017-09-22 17:53:08

Doubles are also double the size of floats, meaning you need to chew through twice as much data, which hurts your cache performance. As always, measure and proceed accordingly.

@user3776645 2014-12-21 18:50:12

The main difference between each of these is the precision.

float is a 32-bit number, double is a 64-bit number and decimal is a 128-bit number.
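Those sizes can be checked directly with sizeof (note that decimal's 128 bits occupy 16 bytes):

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));    // 4  (32 bits)
        Console.WriteLine(sizeof(double));   // 8  (64 bits)
        Console.WriteLine(sizeof(decimal));  // 16 (128 bits)
    }
}
```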

@GorkemHalulu 2015-01-02 13:12:39

No one has mentioned that

With default settings, floats (System.Single) and doubles (System.Double) never use overflow checking, while decimal (System.Decimal) always does.

I mean

decimal myNumber = decimal.MaxValue;
myNumber += 1;

throws OverflowException.

But these do not:

float myNumber = float.MaxValue;
myNumber += 1;


double myNumber = double.MaxValue;
myNumber += 1;

@supercat 2015-01-14 00:21:28

float.MaxValue+1 == float.MaxValue, just as decimal.MaxValue+0.1D == decimal.MaxValue. Perhaps you meant something like float.MaxValue*2?

@GorkemHalulu 2015-01-14 06:12:08

@supercat But it is not true that decimal.MaxValue + 1 == decimal.MaxValue

@GorkemHalulu 2015-01-14 06:19:45

@supercat decimal.MaxValue + 0.1m == decimal.MaxValue ok

@supercat 2015-01-14 16:15:01

The System.Decimal throws an exception just before it becomes unable to distinguish whole units, but if an application is supposed to be dealing with e.g. dollars and cents, that could be too late.
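The overflow behaviours discussed in this thread can be sketched side by side: float quietly loses the +1 in rounding and quietly saturates to Infinity on a real overflow, while decimal throws:

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        float max = float.MaxValue;
        Console.WriteLine(max + 1 == max);                    // True: +1 vanishes in rounding
        Console.WriteLine(float.IsPositiveInfinity(max * 2)); // True: silent overflow to Infinity

        try
        {
            decimal m = decimal.MaxValue;
            m += 1;                                           // decimal always checks overflow
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal throws");
        }
    }
}
```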

@warnerl 2014-09-30 08:22:24

The Decimal, Double, and Float variable types are different in the way that they store the values. Precision is the main difference where float is a single precision (32 bit) floating point data type, double is a double precision (64 bit) floating point data type and decimal is a 128-bit floating point data type.

Float - 32 bit (7 digits)

Double - 64 bit (15-16 digits)

Decimal - 128 bit (28-29 significant digits)


@CharithJ 2011-08-29 00:06:27

float has about 7 digits of precision

double has about 15 digits of precision

decimal has about 28 digits of precision

If you need better accuracy, use double instead of float. On modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which matters in practice only when you have many of them.

I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic
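One way to see the precision gap (a minimal sketch): a double round-trips through another double unchanged, but not through a float, which keeps only ~7 of its ~15-16 digits:

```csharp
using System;

class PrecisionLossDemo
{
    static void Main()
    {
        double d = Math.PI;     // 3.141592653589793 (~15-16 digits)
        float f = (float)d;     // keeps only ~7 digits

        // The cast back cannot restore the digits the float discarded.
        Console.WriteLine((double)f == d);   // False
    }
}
```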

@supercat 2014-05-29 17:57:34

@RogerLipscombe: I would consider double proper in accounting applications in those cases (and basically only those cases) where no integer type larger than 32 bits was available, and the double was being used as though it were a 53-bit integer type (e.g. to hold a whole number of pennies, or a whole number of hundredths of a cent). Not much use for such things nowadays, but many languages gained the ability to use double-precision floating-point values long before they gained 64-bit (or in some cases even 32-bit!) integer math.

@saille 2015-01-15 03:16:10

Your answer implies precision is the only difference between these data types. Given binary floating point arithmetic is typically implemented in hardware FPU, performance is a significant difference. This may be inconsequential for some applications, but is critical for others.

@BrainSlugs83 2015-03-14 22:50:17

@supercat double is never proper in accounting applications. Because Double can only approximate decimal values (even within the range of its own precision). This is because double stores the values in a base-2 (binary)-centric format.

@supercat 2015-03-15 19:45:28

@BrainSlugs83: Use of floating-point types to hold non-whole-number quantities would be improper, but it was historically very common for languages to have floating-point types that could precisely represent larger whole-number values than their integer types could represent. Perhaps the most extreme example was Turbo-87 whose only integer types were limited to -32768 to +32767, but whose Real could IIRC represent values up to 1.8E+19 with unit precision. I would think it would be much saner for an accounting application to use Real to represent a whole number of pennies than...

@supercat 2015-03-15 19:47:02

...for it to try to perform multi-precision math using a bunch of 16-bit values. For most other languages the difference wasn't that extreme, but for a long time it has been very common for languages not to have any integer type that went beyond 4E9 but have a double type which had unit accuracy up to 9E15. If one needs to store whole numbers which are bigger than the largest available integer type, using double is apt to be simpler and more efficient than trying to fudge multi-precision math, especially given that while processors have instructions to perform 16x16->32 or...

@supercat 2015-03-15 19:51:01

...32x32->64 multiplication, programming languages generally don't.

@Mark Jones 2011-04-13 13:55:01

The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:

  • A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
  • Decimal is much (much) slower than float and double for most operations, primarily because floating-point operations are done in binary, whereas Decimal arithmetic is done in base 10 (i.e. floats and doubles are handled by FPU/SSE hardware, whereas decimals are calculated in software).
  • Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values.
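The range gap in the last bullet is easy to demonstrate (a minimal sketch):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(double.MaxValue);   // approx. 1.7976931348623157E+308
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (approx. 7.9E+28)

        double big = 1e300;                   // fine as a double
        Console.WriteLine(big);
        // decimal d = (decimal)big;          // would throw OverflowException at runtime
    }
}
```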

@James Moore 2016-04-06 16:59:08

If you're doing financial calculations, you absolutely have to roll your own datatypes or find a good library that matches your exact needs. Accuracy in a financial setting is defined by (human) standards bodies and they have very specific localized (both in time and geography) rules about how to do calculations. Things like correct rounding aren't captured in the simple numeric datatypes in .Net. The ability to do calculations is only a very small part of the puzzle.

@xport 2010-07-29 07:21:10

  1. Double and float can be divided by integer zero without an exception at both compilation and run time.
  2. Decimal cannot be divided by integer zero. Compilation will always fail if you do that.

@BrainSlugs83 2011-06-23 19:29:42

They sure can! They also have a couple of "magic" values such as Infinity, Negative Infinity, and NaN (not a number), which makes them very useful for detecting vertical lines while computing slopes... Further, if you need to decide between calling float.TryParse, double.TryParse, and decimal.TryParse (to detect if a string is a number, for example), I recommend using double or float, as they will parse "Infinity", "-Infinity", and "NaN" properly, whereas decimal will not.

@Drew Noakes 2016-11-18 00:24:11

Compilation only fails if you attempt to divide a literal decimal by zero (CS0020), and the same is true of integral literals. However if a runtime decimal value is divided by zero, you'll get an exception not a compile error.

@Winter 2017-05-17 01:00:00

@BrainSlugs83 However, you might not want to parse "Infinity" or "NaN" depending on the context. Seems like a good exploit for user input if the developer isn't rigorous enough.
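The division behaviour discussed in this thread, sketched with runtime values (so the compiler cannot reject the divisions as constant expressions):

```csharp
using System;

class DivideByZeroDemo
{
    static void Main()
    {
        // float/double follow IEEE 754 semantics: no exception is thrown.
        double zero = 0.0;
        Console.WriteLine(double.IsPositiveInfinity(1.0 / zero));  // True
        Console.WriteLine(double.IsNaN(zero / zero));              // True

        // A constant 1m / 0m is compile error CS0020;
        // a runtime decimal zero throws instead.
        decimal mzero = 0m;
        try
        {
            Console.WriteLine(1m / mzero);
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("decimal throws at runtime");
        }
    }
}
```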
