By Jon Schneider


2008-09-19 16:53:10 8 Comments

Several modern programming languages (including C++, Java, and C#) allow integer overflow to occur at runtime without raising any kind of error condition.

For example, consider this (contrived) C# method, which does not account for the possibility of overflow/underflow. (For brevity, the method also doesn't handle the case where the specified list is a null reference.)

//Returns the sum of the values in the specified list.
private static int sumList(List<int> list)
{
    int sum = 0;
    foreach (int listItem in list)
    {
        sum += listItem;
    }
    return sum;
}

If this method is called as follows:

List<int> list = new List<int>();
list.Add(2000000000);
list.Add(2000000000);
int sum = sumList(list);

An overflow will occur in the sumList() method (because the int type in C# is a 32-bit signed integer, and the sum of the values in the list exceeds the value of the maximum 32-bit signed integer). The sum variable will have a value of -294967296 (not a value of 4000000000); this most likely is not what the (hypothetical) developer of the sumList method intended.
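To see where that value comes from: 2,000,000,000 + 2,000,000,000 = 4,000,000,000, which exceeds int.MaxValue (2,147,483,647), so the result wraps modulo 2^32: 4,000,000,000 - 4,294,967,296 = -294,967,296. A minimal snippet reproducing the wrap (note that C# rejects overflowing constant expressions at compile time, so the unchecked() wrapper is needed for this to even compile):

using System;

class WrapDemo
{
    static void Main()
    {
        // Constant arithmetic is checked at compile time; unchecked()
        // opts back into silent wrap-around.
        Console.WriteLine(unchecked(2000000000 + 2000000000)); // -294967296
    }
}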

Obviously, there are various techniques that can be used by developers to avoid the possibility of integer overflow, such as using a type like Java's BigInteger, or the checked keyword and /checked compiler switch in C#.

However, the question that I'm interested in is why these languages were designed to by default allow integer overflows to happen in the first place, instead of, for example, raising an exception when an operation is performed at runtime that would result in an overflow. It seems like such behavior would help avoid bugs in cases where a developer neglects to account for the possibility of overflow when writing code that performs an arithmetic operation that could result in overflow. (These languages could have included something like an "unchecked" keyword that could designate a block where integer overflow is permitted to occur without an exception being raised, in those cases where that behavior is explicitly intended by the developer; C# actually does have this.)

Does the answer simply boil down to performance -- the language designers didn't want their respective languages to default to having "slow" arithmetic integer operations where the runtime would need to do extra work to check whether an overflow occurred, on every applicable arithmetic operation -- and this performance consideration outweighed the value of avoiding "silent" failures in the case that an inadvertent overflow occurs?

Are there other reasons for this language design decision as well, other than performance considerations?

8 comments

@Doug T. 2008-09-19 17:01:33

You work under the assumption that integer overflow is always undesired behavior.

Sometimes integer overflow is desired behavior. One example I've seen is representing an absolute heading value as a fixed-point number. Given an unsigned 32-bit integer, 0 represents 0 (equivalently 360) degrees, and the maximum value (0xffffffff) is the largest representable value just below 360 degrees.

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t shipsHeadingInDegrees = 0;

    // Rotate by a bunch of degrees
    shipsHeadingInDegrees += 0x80000000; // 180 degrees
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees, overflows
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees

    // Ship's heading now will be 180 degrees
    std::cout << "Ship's heading is "
              << (double(shipsHeadingInDegrees) / double(0xffffffff)) * 360.0
              << std::endl;
}

There are probably other situations where overflow is acceptable, similar to this example.
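One everyday case in C# (an illustrative sketch, not taken from the thread): hand-rolled GetHashCode implementations conventionally mix fields with multiplications that are expected to wrap, and an unchecked block documents that intent:

using System;

class Point
{
    private readonly int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public override int GetHashCode()
    {
        // Hash mixing is deliberately performed modulo 2^32; the
        // unchecked block permits the wrap even under /checked.
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + x;
            hash = hash * 31 + y;
            return hash;
        }
    }
}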

@Kibbee 2008-09-19 18:33:06

There are probably many more instances where integer overflow yields wrong results than where it yields the correct result. In fact, the only time it actually generates correct results is when it's the expected behaviour and is actually being used as a feature, such as in your example.

@OwenP 2008-09-19 21:46:56

@Kibbee I'd be willing to venture that in the case where integer overflow can lead to error, it's more an indicator that you haven't done proper range checking than a failure of the programming language. In the cases where it's not expected, your code should check for it.
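For illustration, the kind of manual range check being suggested might look like this in C# (a hedged sketch; the class and method names are made up for the example, not taken from any library):

using System;

static class SafeMath
{
    public static int AddOrThrow(int a, int b)
    {
        // Test against the representable range before adding, so the
        // check itself cannot overflow.
        if (b > 0 && a > int.MaxValue - b)
            throw new OverflowException();
        if (b < 0 && a < int.MinValue - b)
            throw new OverflowException();
        return a + b;
    }
}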

@Dan 2008-12-17 15:49:43

Not to mention the inherent hazard of relying on a platform-specific low-level detail like this. What happens if this is recompiled on a 64-bit platform? Eek.

@sleske 2010-02-23 01:26:25

@Dan: Note that the code uses uint32_t. That is guaranteed to be 32 bits (hence the name :-) ). So the author did think of that point.

@Dan 2010-02-26 18:34:20

@sleske: If you check the edits, you'll see that Doug's original post used "unsigned int"...hence my comment. Perhaps my comment caused him to correct his code...

@supercat 2012-12-23 18:57:49

While there are places where it's useful to have a type which behaves as a discrete algebraic ring with specific wrapping behavior, I would posit that it would be more useful to have a type Int32 which is checked, and a type WrappingInt32 which is not (likewise for other sizes). Code could then easily obtain the proper behavior in each circumstance.

@markmnl 2014-12-17 02:33:56

I find myself wanting wrapping more often than not, e.g. sequence counters that I expect to loop (so I can just increment them), or a database with no unsigned type, where casting just works: I can store (long)ulong.MaxValue and get it back again with (ulong)row[I].
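A minimal sketch of that round trip (the variable names are illustrative; the data-access indexer row[I] from the comment is left out):

using System;

class UnsignedRoundTrip
{
    static void Main()
    {
        ulong id = ulong.MaxValue;

        // The cast preserves the 64-bit pattern and only changes its
        // interpretation; by default C# does not throw here.
        long stored = (long)id;             // -1, fits a signed BIGINT column

        ulong roundTrip = (ulong)stored;    // 18446744073709551615 again
        Console.WriteLine(roundTrip == id); // True
    }
}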

@wintermute 2018-01-16 06:43:16

Another example of integer overflow as desired behavior is subtraction, i.e. adding a positive number to the negative number of the same absolute value, which relies on the sum wrapping around to 0.
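A short C# illustration of that wrap-to-zero (a sketch; the unchecked() wrapper is only needed for the constant conversion, since runtime uint arithmetic already wraps by default):

using System;

class TwosComplementDemo
{
    static void Main()
    {
        uint a = 5;
        uint negFive = unchecked((uint)-5); // 4294967291, two's complement of 5

        // 5 + 4294967291 = 4294967296, which wraps to 0 in 32 bits:
        // exactly how subtraction is realized on a binary adder.
        Console.WriteLine(a + negFive); // 0
    }
}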

@Steve Jessop 2008-09-19 18:12:24

C/C++ never mandate trap behaviour. Even the obvious division by 0 is undefined behaviour in C++, not a specified kind of trap.

The C language doesn't have any concept of trapping, unless you count signals.

C++ has a design principle that it doesn't introduce overhead not present in C unless you ask for it. So Stroustrup would not have wanted to mandate that integers behave in a way which requires any explicit checking.

Some early compilers, and lightweight implementations for restricted hardware, don't support exceptions at all, and exceptions can often be disabled with compiler options. Mandating exceptions for language built-ins would be problematic.

Even if C++ had made integers checked, 99% of programmers in the early days would have turned it off for the performance boost...

@Jay Bazuzi 2008-09-20 17:15:15

In C#, it was a question of performance. Specifically, out-of-box benchmarking.

When C# was new, Microsoft was hoping a lot of C++ developers would switch to it. They knew that many C++ folks thought of C++ as being fast, especially faster than languages that "wasted" time on automatic memory management and the like.

Both potential adopters and magazine reviewers are likely to get a copy of the new C#, install it, build a trivial app that no one would ever write in the real world, run it in a tight loop, and measure how long it took. Then they'd make a decision for their company or publish an article based on that result.

The fact that their test showed C# to be slower than natively compiled C++ is the kind of thing that would turn people off C# quickly. The fact that your C# app is going to catch overflow/underflow automatically is the kind of thing that they might miss. So, it's off by default.

I think it's obvious that 99% of the time we want /checked to be on. It's an unfortunate compromise.

@supercat 2012-12-23 18:55:47

How much would integer overflow checking have affected any realistic benchmarks? It seems to me there are many other things in C# which are much worse (e.g. given a field readonly Rect foo;, a statement like DoSomething(foo.X, foo.Y); requires making a copy of foo, calling its X accessor method, making another copy of foo, and calling its Y accessor method). That's pretty massive overhead, enough to make integer-overflow checking seem trivial by comparison.
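For context, a sketch of the defensive-copy behavior being described (the type and member names are illustrative; the later readonly struct modifier, added in C# 7.2, eliminates these copies):

using System;

struct Rect
{
    private int x, y;
    public Rect(int x, int y) { this.x = x; this.y = y; }
    public int X { get { return x; } }
    public int Y { get { return y; } }
}

class Ship
{
    private readonly Rect foo = new Rect(3, 4);

    public void Report()
    {
        // foo is a readonly field of a mutable struct type, so the
        // compiler copies foo before each accessor call (the getter
        // might mutate it); two accesses mean two defensive copies.
        DoSomething(foo.X, foo.Y);
    }

    static void DoSomething(int x, int y)
    {
        Console.WriteLine("{0}, {1}", x, y);
    }
}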

@Jay Bazuzi 2012-12-24 01:27:28

@supercat: In most real world code, turning on /checked by default would not be measurably slower, and would catch a certain class of important bugs. However, as I said in my answer, one of the first things an early reviewer of C# might do is test the performance of integer arithmetic in a tight loop.

@Jay Bazuzi 2012-12-24 01:38:14

Remember that C# is 12 years old now. Back then, many programmers believed "C++ is fast, Java is slow, C# is like Java". Today, I think, most programmers see time-to-market and managing complexity as having a bigger impact on business success than tight-loop optimization.

@markmnl 2014-12-17 02:30:13

Was the reason really performance, though? I personally find myself often relying on the behaviour (i.e. unchecked by default), e.g. sequence counters I expect to loop, or a database that has no type for an unsigned value, where casting just works...

@Eclipse 2008-09-19 16:59:59

Backwards compatibility is a big one. With C, it was assumed that you were paying enough attention to the size of your datatypes that if an over/underflow occurred, that was what you wanted. Then with C++, C# and Java, very little changed with how the "built-in" data types worked.

@devinmoore 2008-09-19 16:59:17

My understanding of why errors would not be raised by default at runtime boils down to the legacy of wanting to create programming languages with ACID-like behavior. Specifically, the tenet that whatever you code it to do (or don't code), it will do (or not do). If you didn't code an error handler, then by virtue of that absence the machine "assumes" that you really want to do the ridiculous, crash-prone thing you're telling it to do.

(ACID reference: http://en.wikipedia.org/wiki/ACID)

@Stephen C 2009-09-21 12:28:21

I don't see any connection between ACID properties and what you are describing. Please explain the connection ...

@Dima 2008-09-19 16:57:42

Because checking for overflow takes time. Each primitive mathematical operation, which normally translates into a single assembly instruction, would have to include a check for overflow, turning it into multiple assembly instructions and potentially making the program several times slower.
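To make the cost concrete, here is roughly what each checked addition amounts to, written out in portable C# (an approximation; a real runtime would instead branch on the CPU's overflow flag after the add):

using System;

static class Arithmetic
{
    public static int AddWithCheck(int a, int b)
    {
        // One add becomes: a widening add, a range compare, and a branch.
        long wide = (long)a + b;
        if (wide > int.MaxValue || wide < int.MinValue)
            throw new OverflowException();
        return (int)wide;
    }
}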

@Rob Walker 2008-09-19 16:56:57

It is likely 99% performance. On x86, you would have to check the overflow flag after every operation, which would be a huge performance hit.

The other 1% would cover those cases where people are doing fancy bit manipulations or being 'imprecise' in mixing signed and unsigned operations and want the overflow semantics.

@David Hill 2008-09-19 16:55:49

I think performance is a pretty good reason. If you consider every instruction in a typical program that increments an integer, and if instead of the simple op to add 1, it had to check every time if adding 1 would overflow the type, then the cost in extra cycles would be pretty severe.

@Thilo 2017-03-24 04:28:51

Doesn't the CPU give you a flag back when the operation overflowed? (Of course, checking that flag also takes time).

@phuclv 2017-08-14 07:59:33

@Thilo there are CPUs without any flag registers, such as MIPS. Even doing simple big-integer operations on them is a slight pain
