By Rick Teachey

2015-05-06 15:32:43 8 Comments

It is my understanding that the range() function, which is actually an object type in Python 3, generates its contents on the fly, similar to a generator.

This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated:

1000000000000000 in range(1000000000000001)

Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous).

I have also tried things like this, but the calculation is still almost instant:

1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens
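One quick way to convince yourself is to time the membership test with the standard timeit module. This is just a sketch; absolute numbers vary by machine, but the point is that the huge range is not noticeably slower than the small one:

```python
import timeit

# Membership tests against ranges of wildly different sizes take
# roughly the same (tiny) amount of time, so no iteration can be
# happening under the hood.
small = timeit.timeit('10**6 in range(10**6 + 1)', number=100)
huge = timeit.timeit('10**18 in range(10**18 + 1)', number=100)
print(small, huge)  # both finish in well under a second
```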

If I try to implement my own range function, the result is not so nice!!

def my_crappy_range(N):
    i = 0
    while i < N:
        yield i
        i += 1

What is the range() object doing under the hood that makes it so fast?

Martijn Pieters' answer was chosen for its completeness, but also see abarnert's first answer for a good discussion of what it means for range to be a full-fledged sequence in Python 3, and some information/warning regarding potential inconsistency for __contains__ function optimization across Python implementations. abarnert's other answer goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of xrange in Python 2). Answers by poke and by wim provide the relevant C source code and explanations for those who are interested.


@Sanan Fataliyev 2018-12-14 08:19:00

Here is an implementation in C#. You can see how Contains works in O(1) time.

public struct Range
{
    private readonly int _start;
    private readonly int _stop;
    private readonly int _step;

    // other members omitted for brevity

    public bool Contains(int number)
    {
        // precheck - if the number is not on a valid step point, return false
        // for example, if start=5, step=10, stop=1000, it is obvious that 163 is
        // not in this range (due to remainder)
        if ((_start % _step + _step) % _step != (number % _step + _step) % _step)
            return false;

        // with the help of the step's sign, we can check both borders in a uniform manner
        int s = Math.Sign(_step);

        // no if/else needed to handle both cases - negative and positive step
        return number * s >= _start * s && number * s < _stop * s;
    }
}

@Sławomir Lenart 2018-03-16 10:47:58

It's all about a lazy approach to evaluation and some extra optimization in range. Values in a range don't need to be computed until they are actually used, and thanks to the extra optimization, sometimes not even then.

By the way, your integer is not that big; consider sys.maxsize:

sys.maxsize in range(sys.maxsize) is pretty fast

thanks to the optimization: it's easy to compare a given integer just with the min and max of the range.


float(sys.maxsize) in range(sys.maxsize) is pretty slow.

(in this case there is no optimization in range, so if Python receives an unexpected float, it will compare all the numbers one by one)
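A sketch of the contrast, using a much smaller range than sys.maxsize so the slow scan actually finishes (timings vary by machine):

```python
import timeit

# An exact int takes the O(1) mathematical path; a float falls back
# to scanning every value in the range one by one.
fast = timeit.timeit('10**6 - 1 in range(10**6)', number=3)
slow = timeit.timeit('float(10**6 - 1) in range(10**6)', number=3)
print(f'int: {fast:.6f}s  float: {slow:.6f}s')  # the float path is far slower
```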

This is an implementation detail you should be aware of, but it should not be relied upon, because it may change in the future.

@RBF06 2019-01-15 16:56:40


The object returned by range() is actually a range object. This object implements the iterator interface so you can iterate through its values sequentially, just like a generator, but it also implements the __contains__ interface which is actually what gets called when an object appears on the right hand side of the in operator. The __contains__() method returns a bool of whether or not the item is in the object. Since range objects know their bounds and stride, this is very easy to implement in O(1).
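Calling the hook directly shows the dispatch; a small sketch:

```python
r = range(0, 10**12, 5)

# `x in r` dispatches to r.__contains__(x); for ints this is O(1).
print(r.__contains__(10**11))      # True: in bounds and a multiple of 5
print(r.__contains__(10**11 + 1))  # False: the stride steps over it
print(10**11 in r)                 # same answer via the in operator
```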

@Martijn Pieters 2015-05-06 15:33:58

The Python 3 range() object doesn't produce numbers immediately; it is a smart sequence object that produces numbers on demand. All it contains is your start, stop and step values, then as you iterate over the object the next integer is calculated each iteration.

The object also implements the object.__contains__ hook, and calculates if your number is part of its range. The calculation is an O(1) constant-time operation; there is never a need to scan through all possible integers in the range.

From the range() object documentation:

The advantage of the range type over a regular list or tuple is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the start, stop and step values, calculating individual items and subranges as needed).
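The constant-memory claim is easy to check with sys.getsizeof (a sketch; exact byte counts depend on the build and platform):

```python
import sys

# A range object only stores start, stop and step, so its size is
# independent of how many values it represents.
print(sys.getsizeof(range(10)))
print(sys.getsizeof(range(10**18)))     # same size as the tiny range

# A real list of even ten values is already bigger.
print(sys.getsizeof(list(range(10))))
```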

So at a minimum, your range() object would do:

class my_range(object):
    def __init__(self, start, stop=None, step=1):
        if stop is None:
            start, stop = 0, start
        self.start, self.stop, self.step = start, stop, step
        if step < 0:
            lo, hi = stop, start
        else:
            lo, hi = start, stop
        self.length = 0 if lo > hi else ((hi - lo - 1) // abs(step)) + 1

    def __iter__(self):
        current = self.start
        if self.step < 0:
            while current > self.stop:
                yield current
                current += self.step
        else:
            while current < self.stop:
                yield current
                current += self.step

    def __len__(self):
        return self.length

    def __getitem__(self, i):
        if i < 0:
            i += self.length
        if 0 <= i < self.length:
            return self.start + i * self.step
        raise IndexError('Index out of range: {}'.format(i))

    def __contains__(self, num):
        if self.step < 0:
            if not (self.stop < num <= self.start):
                return False
        else:
            if not (self.start <= num < self.stop):
                return False
        return (num - self.start) % self.step == 0

This is still missing several things that a real range() supports (such as the .index() or .count() methods, hashing, equality testing, or slicing), but should give you an idea.

I also simplified the __contains__ implementation to only focus on integer tests; if you give a real range() object a non-integer value (including subclasses of int), a slow scan is initiated to see if there is a match, just as if you use a containment test against a list of all the contained values. This was done to continue to support other numeric types that just happen to support equality testing with integers but are not expected to support integer arithmetic as well. See the original Python issue that implemented the containment test.

@Rick Teachey 2015-05-06 15:39:05

Ok, so, a calculation of range(1000000000)[index_number] for example, might be something like: return start + step * index_number?

@Martijn Pieters 2015-05-06 15:40:27

@RickTeachey: yup. There is no need to produce any intermediary number if you can just calculate it.
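A quick sanity check of that arithmetic against a real range():

```python
# Indexing a range is pure arithmetic: r[i] == start + step * i,
# with no intermediate values ever produced.
r = range(10, 10**15, 7)
i = 123456789
assert r[i] == 10 + 7 * i
print(r[i])  # computed instantly
```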

@Veedrac 2015-05-06 15:50:34

I think this is a little unclear. range objects don't yield elements - they produce an iterator that yields elements. range is a virtual sequence.

@Martijn Pieters 2015-05-06 15:51:21

@Veedrac: I didn't want to make the sentence more laborious and go into the details of iterators vs iterables. I'll see if I can retool it a little.

@Lucretiel 2015-05-06 20:55:44

Fun fact: because you have a working implementation of __getitem__ and __len__, the __iter__ implementation is actually unnecessary.

@abarnert 2015-05-06 22:01:03

@Lucretiel: In Python 2.3, a special xrangeiterator was added specifically because that wasn't fast enough. And then somewhere in 3.x (I'm not sure if it was 3.0 or 3.2) it was tossed and they use the same listiterator type that list uses.

@Nick T 2015-05-06 22:36:06

@Lucretiel don't you need to inherit from for that to work?

@Nick T 2015-05-06 22:43:40

@Lucretiel hmm, guess you don't: It probably makes it faster in some cases though?

@Martijn Pieters 2015-05-06 23:18:00

@NickT: no, merely implementing the sequence protocol is enough; see the iter() function docs.

@user380772 2015-05-07 04:03:15

@MartijnPieters: How is that Calculation "a O(1) constant time operation"? For positive integers, I don't see any way to evaluate N in range(0,N+1,D) significantly faster than the time it takes to divide N by D.

@stefan.schwetschke 2015-05-07 07:12:09

@MartijnPieters comparison of two arbitrary numbers m and n (numbers of arbitrary precision) should be O(log min(m,n)). O(1) would only be true, when there is a hard coded upper limit for the size and precision of m and n.

@Martijn Pieters 2015-05-07 09:09:48

@stefan.schwetschke: right, when the patch was originally written, ranges only supported start and stop values up to sys.maxsize, so there was an upper bound. Compared to len(range_object), the number comparisons / modulus is near enough constant to not matter for the analysis here.

@Cody Piersall 2015-05-08 15:15:13

I would define the constructor as def __init__(self, *start_stop_step) and parse it out from there; the way the arguments are labelled now is kind of confusing. Nevertheless, +1; you still definitely explained the behavior.

@Martijn Pieters 2015-05-11 07:48:03

@Fraxtil: see the comments above, you were not the first to point this out. :-) Your n there is bits in the integers, not the length of the range, and even then we are talking about a new C integer every 30 bits of Python integer value. As such, the calculations are near enough to O(1) when focusing on the size of the range, not on the bit size of the integers involved.

@abarnert 2015-05-16 09:22:40

@CodyPiersall: Unfortunately, that's the signature of the real class's initializer. range is older than *args (much less the argclinic API that lets C-API functions have complete Python signatures). A few other old functions (and a few newer functions, like xrange, slice, and itertools.islice, for consistency) work the same way, but for the most part, Guido and the rest of the core devs seem to agree with you. The 2.0+ docs even describe range and friends as if they were C++-style overloads rather than show the actual confusing signature.

@abarnert 2015-05-16 09:25:15

@CodyPiersall: Actually, here's a quote from Guido in the argclinic discussion, when Nick Coghlan came up with a way to allow defining range unambiguously: "Please don't make it easier for people to copy my worst design decision." So, I'm pretty sure he agrees that range is confusing as written.

@Rick Teachey 2017-07-06 11:54:52

I enjoyed your interview on Talk Python to Me. Thanks for mentioning this question; that was a fun discussion.

@jfs 2017-09-02 06:56:43

here's a pure-Python implementation of __getitem__ and __contains__ that passes all tests for Python 3's range().

@user202729 2018-07-18 02:54:30

There are enough comments here complaining about the O(log n) instead of O(1) that I think that should be edited into the answer. Or "O(1) arithmetic operations".

@Martijn Pieters 2018-07-18 11:41:36

@user202729: no, I disagree there. That only comes into play for values > sys.maxsize and due to the extremely low constant cost of the C implementation something you can ignore in this specific case anyway. That kind of technical-detail discussion is fine in the comments but would be a distraction to the answer.

@Karl Knechtel 2019-07-13 13:40:20

What on earth sort of numeric type actually demands this slow-scan treatment? I would expect the following logic to work: try to determine int(num); if an exception is raised return False; if the resulting integer doesn't compare equal to num return False; otherwise proceed to check whether that integer is in the range. Do you know of a non-contrived counterexample?

@Martijn Pieters 2019-07-13 14:12:58

@KarlKnechtel you can’t predict how other types behave, full stop. There is no guarantee that range was passed an actual numeric type. It is not enough to just convert the argument to int because why bother with a custom type then? It is up to the developer to make the call on whether or not to use int(custom_type) in range(....).

@abarnert 2015-05-06 16:01:17

The fundamental misunderstanding here is in thinking that range is a generator. It's not. In fact, it's not any kind of iterator.

You can tell this pretty easily:

>>> a = range(5)
>>> print(list(a))
[0, 1, 2, 3, 4]
>>> print(list(a))
[0, 1, 2, 3, 4]

If it were a generator, iterating it once would exhaust it:

>>> b = my_crappy_range(5)
>>> print(list(b))
[0, 1, 2, 3, 4]
>>> print(list(b))
[]
What range actually is, is a sequence, just like a list. You can even test this:

>>> import collections.abc
>>> isinstance(a, collections.abc.Sequence)
True

This means it has to follow all the rules of being a sequence:

>>> a[3]         # indexable
3
>>> len(a)       # sized
5
>>> 3 in a       # membership
True
>>> reversed(a)  # reversible
<range_iterator at 0x101cd2360>
>>> a.index(3)   # implements 'index'
3
>>> a.count(3)   # implements 'count'
1

The difference between a range and a list is that a range is a lazy or dynamic sequence; it doesn't remember all of its values, it just remembers its start, stop, and step, and creates the values on demand on __getitem__.

(As a side note, if you print(iter(a)), you'll notice that range uses the same listiterator type as list. How does that work? A listiterator doesn't use anything special about list except for the fact that it provides a C implementation of __getitem__, so it works fine for range too.)

Now, there's nothing that says that Sequence.__contains__ has to be constant time—in fact, for obvious examples of sequences like list, it isn't. But there's nothing that says it can't be. And it's easier to implement range.__contains__ to just check it mathematically ((val - start) % step, but with some extra complexity to deal with negative steps) than to actually generate and test all the values, so why shouldn't it do it the better way?
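That mathematical check can be sketched in a few lines of Python, handling both step signs as described:

```python
def fast_contains(r, val):
    """O(1) membership sketch mirroring what range.__contains__ does."""
    if r.step > 0:
        in_bounds = r.start <= val < r.stop
    else:
        in_bounds = r.stop < val <= r.start
    # Python's % follows the divisor's sign, so this test stays valid
    # for negative steps too.
    return in_bounds and (val - r.start) % r.step == 0

# Agrees with the built-in behaviour across a spread of values:
for r in (range(0, 100, 3), range(100, 0, -7)):
    for v in (-5, 0, 12, 93, 99, 100):
        assert fast_contains(r, v) == (v in r)
```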

But there doesn't seem to be anything in the language that guarantees this will happen. As Ashwini Chaudhari points out, if you give it a non-integral value, instead of converting to integer and doing the mathematical test, it will fall back to iterating all the values and comparing them one by one. And just because CPython 3.2+ and PyPy 3.x versions happen to contain this optimization, and it's an obvious good idea and easy to do, there's no reason that IronPython or NewKickAssPython 3.x couldn't leave it out. (And in fact CPython 3.0-3.1 didn't include it.)

If range actually were a generator, like my_crappy_range, then it wouldn't make sense to test __contains__ this way, or at least the way it makes sense wouldn't be obvious. If you'd already iterated the first 3 values, is 1 still in the generator? Should testing for 1 cause it to iterate and consume all the values up to 1 (or up to the first value >= 1)?
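For contrast, membership testing on an actual generator consumes it, which is exactly the awkwardness described above; a small sketch:

```python
def gen():
    yield from (0, 1, 2, 3, 4)

g = gen()
print(3 in g)     # True, but the search consumed 0, 1, 2 and 3
print(list(g))    # only [4] is left
print(1 in g)     # False now: 1 was already consumed
```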

@Rick Teachey 2015-05-06 16:05:22

This is a pretty important thing to get straight. I suppose the differences between Python 2 and 3 may have led to my confusion on this point. In any case, I should have realized it, since range is listed (along with list and tuple) as a sequence type.

@abarnert 2015-05-06 16:17:40

@RickTeachey: Actually, in 2.6+ (I think; maybe 2.5+), xrange is a sequence too. See 2.7 docs. In fact, it was always an almost-sequence.

@abarnert 2015-05-06 22:03:05

@RickTeachey: Actually, I was wrong; in 2.6-2.7 (and 3.0-3.1), it claims to be a sequence, but it's still just an almost-sequence. See my other answer.

@Smit Johnth 2016-06-18 19:40:23

It's not an iterator, it's a sequence (an Iterable in Java terms, an IEnumerable in C#) - something with an .__iter__() method that returns an iterator. The iterator, in its turn, can be used only once.

@Thomas Ahle 2016-08-29 10:57:43

It seems strange that 's' in range(10**10) isn't optimized to immediately return False.

@ShadowRanger 2016-10-17 12:47:35

@ThomasAhle: Because range doesn't check types when the value isn't an integer, since it's always possible some type has an __eq__ that is compatible with int. Sure, str obviously won't work, but they didn't want to slow things down by explicitly checking all the types that can't be in there (and after all, a str subclass could override __eq__ and be contained in the range).
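A contrived sketch of that last point (NumericStr is a hypothetical class, written only to show why the fallback scan has to exist):

```python
class NumericStr(str):
    """A str subclass that compares equal to its integer value."""
    def __eq__(self, other):
        return int(self) == other
    __hash__ = str.__hash__

# Not an exact int, so range falls back to scanning - and finds a match.
print(NumericStr("3") in range(5))  # True, via the slow linear search
print("3" in range(5))              # False: a plain str never equals an int
```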

@jez 2018-11-19 20:54:34

For simplicity (as well as python 2 compatibility since some of these points generalize to python 2, even if the question itself doesn't) maybe cite collections.Sequence instead of ?

@Sven Marnach 2018-12-14 12:13:51

"And it's easier to implement range.__contains__ to just check it mathematically […] than to actually generate and test all the values" – I dispute this, since you could simply omit the implementation of __contains__ to get the latter behaviour, and that's certainly easier than any implementation you need to explicitly write. :)

@wim 2015-05-06 15:41:18

Use the source, Luke!

In CPython, range(...).__contains__ (a method wrapper) will eventually delegate to a simple calculation which checks if the value can possibly be in the range. The reason for the speed here is we're using mathematical reasoning about the bounds, rather than a direct iteration of the range object. To explain the logic used:

  1. Check that the number is between start and stop, and
  2. Check that the stride value doesn't "step over" our number.

For example, 994 is in range(4, 1000, 2) because:

  1. 4 <= 994 < 1000, and
  2. (994 - 4) % 2 == 0.
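Both conditions are easy to verify from Python, reusing the example above:

```python
r = range(4, 1000, 2)

# 1. between start and stop
assert r.start <= 994 < r.stop
# 2. the stride doesn't "step over" the number
assert (994 - r.start) % r.step == 0

print(994 in r)    # True: both conditions hold
print(993 in r)    # False: fails the stride check
print(1000 in r)   # False: fails the bounds check (stop is exclusive)
```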

The full C code is included below, which is a bit more verbose because of memory management and reference counting details, but the basic idea is there:

static int
range_contains_long(rangeobject *r, PyObject *ob)
{
    int cmp1, cmp2, cmp3;
    PyObject *tmp1 = NULL;
    PyObject *tmp2 = NULL;
    PyObject *zero = NULL;
    int result = -1;

    zero = PyLong_FromLong(0);
    if (zero == NULL) /* MemoryError in int(0) */
        goto end;

    /* Check if the value can possibly be in the range. */

    cmp1 = PyObject_RichCompareBool(r->step, zero, Py_GT);
    if (cmp1 == -1)
        goto end;
    if (cmp1 == 1) { /* positive steps: start <= ob < stop */
        cmp2 = PyObject_RichCompareBool(r->start, ob, Py_LE);
        cmp3 = PyObject_RichCompareBool(ob, r->stop, Py_LT);
    }
    else { /* negative steps: stop < ob <= start */
        cmp2 = PyObject_RichCompareBool(ob, r->start, Py_LE);
        cmp3 = PyObject_RichCompareBool(r->stop, ob, Py_LT);
    }

    if (cmp2 == -1 || cmp3 == -1) /* TypeError */
        goto end;
    if (cmp2 == 0 || cmp3 == 0) { /* ob outside of range */
        result = 0;
        goto end;
    }

    /* Check that the stride does not invalidate ob's membership. */
    tmp1 = PyNumber_Subtract(ob, r->start);
    if (tmp1 == NULL)
        goto end;
    tmp2 = PyNumber_Remainder(tmp1, r->step);
    if (tmp2 == NULL)
        goto end;
    /* result = ((int(ob) - start) % step) == 0 */
    result = PyObject_RichCompareBool(tmp2, zero, Py_EQ);
  end:
    Py_XDECREF(tmp1);
    Py_XDECREF(tmp2);
    Py_XDECREF(zero);
    return result;
}

static int
range_contains(rangeobject *r, PyObject *ob)
{
    if (PyLong_CheckExact(ob) || PyBool_Check(ob))
        return range_contains_long(r, ob);

    return (int)_PySequence_IterSearch((PyObject*)r, ob,
                                       PY_ITERSEARCH_CONTAINS);
}

The "meat" of the idea is mentioned in the line:

/* result = ((int(ob) - start) % step) == 0 */ 

As a final note - look at the range_contains function at the bottom of the code snippet. If the exact type check fails then we don't use the clever algorithm described, instead falling back to a dumb iteration search of the range using _PySequence_IterSearch! You can check this behaviour in the interpreter (I'm using v3.5.0 here):

>>> x, r = 1000000000000000, range(1000000000000001)
>>> class MyInt(int):
...     pass
...
>>> x_ = MyInt(x)
>>> x in r  # calculates immediately :)
True
>>> x_ in r  # iterates for ages.. :(
^\Quit (core dumped)

@wizzwizz4 2018-04-04 08:34:18

@brian_o Would be better implemented with a try: finally: and a return instead of goto...

@ace 2018-08-10 07:20:55

@wizzwizz4 Since when does C support try/finally? (Except some vendor-specific extensions...)

@Stefan Pochmann 2015-05-06 16:04:37

The other answers explained it well already, but I'd like to offer another experiment illustrating the nature of range objects:

>>> r = range(5)
>>> for i in r:
        print(i, 2 in r, list(r))

0 True [0, 1, 2, 3, 4]
1 True [0, 1, 2, 3, 4]
2 True [0, 1, 2, 3, 4]
3 True [0, 1, 2, 3, 4]
4 True [0, 1, 2, 3, 4]

As you can see, a range object is an object that remembers its range and can be used many times (even while iterating over it), not just a one-time generator.

@abarnert 2015-05-06 21:09:22

I wouldn't play up the similarity to slice objects too much; it seems to confuse more people than it helps. (Once someone realizes that, they start wondering why there are two different types in the first place, and it takes a bit to explain it to them. And then some of them go and design other languages like Swift without ever figuring it out, and we get all kinds of annoying problems that they hack back and forth for 5 betas before they finally come up with a reasonable design…)

@Stefan Pochmann 2015-05-06 21:44:55

@abarnert Do you think I should delete that part? I never explicitly create slice objects anyway. I just added it because they're similar and at least to me, it feels more intuitive that slice objects are "static", as the data isn't coming from them but from the sequence they're used on.

@abarnert 2015-05-06 21:59:40

I think your intuition definitely helps make it clearer… but I don't know if you can work that into the answer. Maybe by explicitly calling the indices method? But at that point maybe it's getting too far off the main point?

@Stefan Pochmann 2015-05-06 22:22:26

@abarnert Nah, since I never explicitly used slice objects, I didn't even know the indices method and I don't feel like thinking about it now. I'll just remove that part from the answer.

@poke 2015-05-06 15:41:13

To add to Martijn’s answer, this is the relevant part of the source (in C, as the range object is written in native code):

static int
range_contains(rangeobject *r, PyObject *ob)
{
    if (PyLong_CheckExact(ob) || PyBool_Check(ob))
        return range_contains_long(r, ob);

    return (int)_PySequence_IterSearch((PyObject*)r, ob,
                                       PY_ITERSEARCH_CONTAINS);
}

So for PyLong objects (which is int in Python 3), it will use the range_contains_long function to determine the result. And that function essentially checks if ob is in the specified range (although it looks a bit more complex in C).

If it’s not an int object, it falls back to iterating until it finds the value (or not).

The whole logic could be translated to pseudo-Python like this:

def range_contains (rangeObj, obj):
    if isinstance(obj, int):
        return range_contains_long(rangeObj, obj)

    # default logic by iterating
    return any(obj == x for x in rangeObj)

def range_contains_long (r, num):
    if r.step > 0:
        # positive step: r.start <= num < r.stop
        cmp2 = r.start <= num
        cmp3 = num < r.stop
    else:
        # negative step: r.start >= num > r.stop
        cmp2 = num <= r.start
        cmp3 = r.stop < num

    # outside of the range boundaries
    if not cmp2 or not cmp3:
        return False

    # num must be on a valid step inside the boundaries
    return (num - r.start) % r.step == 0

@abarnert 2015-05-11 19:23:19

@ChrisWesseling: I think this is different-enough information (and enough of it) that editing Martijn's answer wouldn't have been appropriate here. It's a judgment call, but people usually err on the side of not making drastic changes to other people's answers.

@abarnert 2015-05-06 21:42:53

If you're wondering why this optimization was added to range.__contains__, and why it wasn't added to xrange.__contains__ in 2.7:

First, as Ashwini Chaudhary discovered, issue 1766304 was opened explicitly to optimize [x]range.__contains__. A patch for this was accepted and checked in for 3.2, but not backported to 2.7 because "xrange has behaved like this for such a long time that I don't see what it buys us to commit the patch this late." (2.7 was nearly out at that point.)


Originally, xrange was a not-quite-sequence object. As the 3.1 docs say:

Range objects have very little behavior: they only support indexing, iteration, and the len function.

This wasn't quite true; an xrange object actually supported a few other things that come automatically with indexing and len,* including __contains__ (via linear search). But nobody thought it was worth making them full sequences at the time.

Then, as part of implementing the Abstract Base Classes PEP, it was important to figure out which builtin types should be marked as implementing which ABCs, and xrange/range claimed to implement collections.Sequence, even though it still only handled the same "very little behavior". Nobody noticed that problem until issue 9213. The patch for that issue not only added index and count to 3.2's range, it also re-worked the optimized __contains__ (which shares the same math with index, and is directly used by count).** This change went in for 3.2 as well, and was not backported to 2.x, because "it's a bugfix that adds new methods". (At this point, 2.7 was already past rc status.)

So, there were two chances to get this optimization backported to 2.7, but they were both rejected.

* In fact, you even get iteration for free with len and indexing, but in 2.3 xrange objects got a custom iterator. Which they then lost in 3.x, which uses the same listiterator type as list.

** The first version actually reimplemented it, and got the details wrong—e.g., it would give you MyIntSubclass(2) in range(5) == False. But Daniel Stutzbach's updated version of the patch restored most of the previous code, including the fallback to the generic, slow _PySequence_IterSearch that pre-3.2 range.__contains__ was implicitly using when the optimization doesn't apply.

@Ashwini Chaudhary 2015-05-06 22:11:47

From the comments here: improve xrange.__contains__, it looks like they didn't backport it to Python 2 just to leave an element of surprise for users, and it was too late o_O. The count and index patch was added later on.

@wim 2015-05-07 02:08:33

I have a sinister suspicion that some core python devs are partial to "tough love" for python 2.x because they want to encourage people to switch to the far-superior python3 :)

@abarnert 2015-05-07 03:07:56

@wim: A couple of them, definitely, but I don't think that's the case with Benjamin Peterson. He seems more in the camp of "people still on 2.7 are doing so because they want to make sure we don't fix what ain't broke, so change as little as possible". But the most common (or at least vocal) attitude at the moment seems to be "making it easy to port from 2.x to 3.x is priority #1, #2, and #3", possibly because multiple people paid to work on Python are also currently paid to work on Fedora and Ubuntu porting to 3.5.

@Robert Grant 2015-05-11 11:34:36

Also I bet it's a huge burden to have to add new features to old versions. Imagine if you went to Oracle and said, "Look, I'm on Java 1.4 and I deserve lambda expressions! Backport them for nothing."

@Rick Teachey 2015-05-12 13:12:54

@RobertGrant Well the problem with that explanation is, 2.7 isn't old. It was released around the same time as 3.3, I believe.

@Robert Grant 2015-05-12 13:41:31

@RickTeachey yeah it's just an example. If I said 1.7 it would still apply. It's a quantitative difference not qualitative. Basically the (unpaid) devs can't forever make cool new stuff in 3.x and backport it to 2.x for those who don't want to upgrade. It's a huge and ridiculous burden. Do you think there's still something wrong with my reasoning?

@Rick Teachey 2015-05-12 15:26:38

@RobertGrant no not at all, your reasoning is sound, and i'm not strenuously differing. i'm just saying it doesn't fully explain why there was no xrange optimization done for 2.7 (i understand that's not what you were saying). 2.7 was new, not an old version - people were excited about all the new stuff 2.7 would bring as with any other new release. and there was definitely the opportunity to do it during development. it just didn't make the cut.

@abarnert 2015-05-16 09:03:02

@RickTeachey: 2.7 was between 3.1 and 3.2, not around 3.3. And that means 2.7 was in rc when the last changes to 3.2 went in, which makes the bug comments easier to understand. Anyway, I think they made a few mistakes in retrospect (especially assuming people would migrate via 2to3 instead of via dual-version code with the help of libraries like six, which is why we got things like dict.viewkeys that nobody's ever going to use), and there were a few changes that just came too late in 3.2, but for the most part 2.7 was a pretty impressive "last 2.x ever" release.
