By vishes_shell


2016-10-13 10:25:25 8 Comments

So I was playing with list objects and found a strange little thing: if a list is created with list(), it uses more memory than a list comprehension does. I'm using Python 3.5.2.

In [1]: import sys
In [2]: a = list(range(100))
In [3]: sys.getsizeof(a)
Out[3]: 1008
In [4]: b = [i for i in range(100)]
In [5]: sys.getsizeof(b)
Out[5]: 912
In [6]: type(a) == type(b)
Out[6]: True
In [7]: a == b
Out[7]: True
In [8]: sys.getsizeof(list(b))
Out[8]: 1008

From the docs:

Lists may be constructed in several ways:

  • Using a pair of square brackets to denote the empty list: []
  • Using square brackets, separating items with commas: [a], [a, b, c]
  • Using a list comprehension: [x for x in iterable]
  • Using the type constructor: list() or list(iterable)

But it seems that using list() uses more memory.

And the bigger the list, the bigger the gap.

(figure: difference in memory)

Why does this happen?

UPDATE #1

Test with Python 3.6.0b2:

Python 3.6.0b2 (default, Oct 11 2016, 11:52:53) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(range(100)))
1008
>>> sys.getsizeof([i for i in range(100)])
912

UPDATE #2

Test with Python 2.7.12:

Python 2.7.12 (default, Jul  1 2016, 15:12:24) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(xrange(100)))
1016
>>> sys.getsizeof([i for i in xrange(100)])
920


@Reut Sharabani 2016-10-13 10:40:13

I think you're seeing over-allocation patterns. This is a sample from the CPython source:

/* This over-allocates proportional to the list size, making room
 * for additional growth.  The over-allocation is mild, but is
 * enough to give linear-time amortized behavior over a long
 * sequence of appends() in the presence of a poorly-performing
 * system realloc().
 * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
 */

new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
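
(The quoted line computes only the extra padding; list_resize then adds newsize to it. Here is a small Python sketch, not the actual C code, that simulates that rule append by append and reproduces the quoted growth pattern; simulated_capacity is just a name for this illustration:)

def simulated_capacity(final_length):
    """Simulate CPython's list_resize growth rule, append by append."""
    allocated = 0
    for newsize in range(1, final_length + 1):
        if newsize > allocated:  # no free slot left, grow
            allocated = newsize + (newsize >> 3) + (3 if newsize < 9 else 6)
    return allocated

# distinct capacities reached while building lists of up to 100 elements
print(sorted({simulated_capacity(n) for n in range(101)}))
# [0, 4, 8, 16, 25, 35, 46, 58, 72, 88, 106]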

Printing the sizes of list comprehensions of lengths 0-89, you can see the pattern matches:

import sys

# sizes of list comprehensions for lengths 0-89
comprehensions = [sys.getsizeof([1 for _ in range(l)]) for l in range(90)]

# keep only the lengths where the size grew compared to the previous length
steps = zip(comprehensions, comprehensions[1:])
growths = [x for x in enumerate(steps) if x[1][0] != x[1][1]]

# print the results:
for growth in growths:
    print(growth)

Results (format is (list length, (old total size, new total size))):

(0, (64, 96)) 
(4, (96, 128))
(8, (128, 192))
(16, (192, 264))
(25, (264, 344))
(35, (344, 432))
(46, (432, 528))
(58, (528, 640))
(72, (640, 768))
(88, (768, 912))

The over-allocation is done for performance reasons, allowing lists to grow without reallocating memory on every growth (better amortized performance).

A probable reason for the difference is that a list comprehension cannot deterministically calculate the size of the generated list, but list() can. This means the comprehension will continuously grow the list as it fills it, relying on over-allocation, until it is finally full.

It is possible that it will not grow the over-allocation buffer with additional unused slots once it's done (in fact, in most cases it won't; that would defeat the purpose of over-allocation).

list(), however, can add some buffer no matter the final list size, since it knows that size in advance.


Further backing evidence, also from the source, is that list comprehensions invoke LIST_APPEND, which indicates usage of list_resize, which in turn indicates consuming the pre-allocation buffer without knowing how much of it will be filled. This is consistent with the behavior you're seeing.
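
You can check for that opcode yourself with the dis module. On the CPython versions discussed here (3.5/3.6) the comprehension body is a separate nested code object, so this sketch digs it out of co_consts:

import dis

# compile a list comprehension; its body is a nested code object
outer = compile("[i for i in range(100)]", "<example>", "eval")
inner = next(c for c in outer.co_consts if hasattr(c, "co_code"))

dis.dis(inner)  # the loop emits LIST_APPEND for every produced element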


To conclude, list() will pre-allocate more slots as a function of the list size:

>>> sys.getsizeof(list([1,2,3]))
60
>>> sys.getsizeof(list([1,2,3,4]))
64

A list comprehension does not know the list size, so it uses append operations as it grows, depleting the pre-allocation buffer:

# one item before filling pre-allocation buffer completely
>>> sys.getsizeof([i for i in [1,2,3]]) 
52
# fills pre-allocation buffer completely
# note that size did not change, we still have buffered unused slots
>>> sys.getsizeof([i for i in [1,2,3,4]]) 
52
# grows pre-allocation buffer
>>> sys.getsizeof([i for i in [1,2,3,4,5]])
68

@cdarke 2016-10-13 10:41:41

But why would the over-allocation happen with one but not the other?

@Reut Sharabani 2016-10-13 10:42:54

This specifically is from list_resize. I'm not an expert in navigating the source, but if one calls resize and the other doesn't, it could explain the difference.

@tavo 2016-10-13 11:02:12

Python 3.5.2 here. Try printing the sizes of lists of lengths 0 to 35 in a loop. For list I see 64, 96, 104, 112, 120, 128, 136, 144, 160, 192, 200, 208, 216, 224, 232, 240, 256, 264, 272, 280, 288, 296, 304, 312, 328, 336, 344, 352, 360, 368, 376, 384, 400, 408, 416 and for the comprehension 64, 96, 96, 96, 96, 128, 128, 128, 128, 192, 192, 192, 192, 192, 192, 192, 192, 264, 264, 264, 264, 264, 264, 264, 264, 264, 344, 344, 344, 344, 344, 344, 344, 344, 344. I would expect the comprehension, being the one that seems to preallocate memory, to be the algorithm that uses more RAM for certain sizes.
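
A loop like the following reproduces those numbers (a sketch; the exact sizes depend on the interpreter build):

import sys

for n in range(36):
    print(n,
          sys.getsizeof(list(range(n))),         # list constructor
          sys.getsizeof([i for i in range(n)]))  # list comprehension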

@Reut Sharabani 2016-10-13 11:05:14

I'd expect the same. I can look further into it soon. Good comments.

@Reut Sharabani 2016-10-13 11:08:47

Actually, list() deterministically determines the list size, which a list comprehension can't do. This suggests the list comprehension doesn't always "trigger" the "last" growth of the list. Could make sense.

@tavo 2016-10-13 11:33:23

I think that since range supports len, list allocates len plus some preallocated amount of space. Interestingly enough, when I use a generator object as the argument for list (generators do not support len), I get a different size pattern than with the list comprehension: 64, 96, 104, 112, 120, 128, 160, 160, 160, 160, 160, 160, 160, 224, 224, 224, 224, 224, 224, 224, 224, 296, 296, 296, 296, 296, 296, 296, 296, 296, 376, 376, 376, 376, 376
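
The generator case can be reproduced the same way (a sketch; generators expose no len(), so list() cannot pre-size from them):

import sys

for n in range(36):
    gen = (i for i in range(n))
    print(n, sys.getsizeof(list(gen)))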

@vishes_shell 2016-10-13 11:37:10

Thanks everyone for helping me understand this piece of awesome Python.

I don't want to make the question that massive (that's why I'm posting an answer); I just want to show and share my thoughts.

As @ReutSharabani correctly noted: "list() deterministically determines list size". You can see it in this graph.

(figure: graph of sizes)

When you append or use a list comprehension, you always have some sort of boundary that extends when you reach some point. And with list() you have almost the same boundaries, but they are floating.

UPDATE

So thanks to @ReutSharabani, @tavo and @SvenFestersen.

To sum up: list() preallocates memory depending on the list size, while a list comprehension cannot do that (it requests more memory when it's needed, like .append()). That's why list() stores more memory.
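
Under the hood, list() can ask its argument for an expected length (via len() or a length hint) and size the new list from that, while a comprehension only sees one element at a time. A small sketch of that idea, using operator.length_hint to query the hint:

import operator

print(operator.length_hint(range(830)))              # 830: list() can pre-size
print(operator.length_hint(i for i in range(830)))   # 0: no hint, must grow as it goes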

One more graph, showing that list() preallocates memory. The green line shows list(range(830)) being appended to element by element; for a while the memory does not change.

(figure: list() preallocates memory)

UPDATE 2

As @Barmar noted in the comments below, list() should be faster than a list comprehension, so I ran timeit() with number=1000 for list lengths from 4**0 to 4**10, and the results are

(figure: time measurements)
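
(For reference, the measurement was along these lines; a sketch using the same number=1000 and lengths 4**0 to 4**10:)

import timeit

for k in range(11):
    n = 4 ** k
    t_list = timeit.timeit("list(range(n))", globals={"n": n}, number=1000)
    t_comp = timeit.timeit("[i for i in range(n)]", globals={"n": n}, number=1000)
    print(n, t_list, t_comp)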

@tavo 2016-10-13 11:43:56

The answer to why the red line is above the blue one is that when the list constructor can determine the size of the new list from its argument, it will still preallocate the same amount of space as it would if the last element had just arrived and there was not enough room for it. At least that's what makes sense to me.

@vishes_shell 2016-10-13 11:46:23

@tavo it seems the same to me; at some point I want to show it in the graph.

@Barmar 2016-10-18 18:46:06

So while list comprehensions use less memory, they're probably significantly slower because of all the resizes that occur. These will often have to copy the list backbone to a new memory area.

@vishes_shell 2016-10-18 18:50:11

@Barmar actually I can run some time measurements with a range object (that could be fun).

@Barmar 2016-10-18 19:00:52

And it will make your graphs even prettier. :)

@Barmar 2016-10-18 20:12:16

I was hoping you'd overlay them on the same graph. However, what this shows is that you don't really notice much performance impact until the list gets very large.

@vishes_shell 2016-10-18 20:17:09

@Barmar overlaying won't work because the Y axes measure different things (time and memory), so the plots can't really be overlaid on each other. And yeah, list comprehensions are fast.
