By Salvador Dali


2016-08-24 20:45:24 8 Comments

I have the following dataframe:

df = pd.DataFrame([
    (1, 1, 'term1'),
    (1, 2, 'term2'),
    (1, 1, 'term1'),
    (1, 1, 'term2'),
    (2, 2, 'term3'),
    (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

I want to group it by id and group and calculate the number of each term for this id, group pair.

So in the end I am going to get something like this:

term      term1  term2  term3
id group
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0

I was able to achieve what I want by looping over all the rows with df.iterrows() and creating a new dataframe, but this is clearly inefficient. (If it helps, I know the list of all terms beforehand and there are ~10 of them).

It looks like I have to group by and then count values, so I tried df.groupby(['id', 'group']).value_counts(), which does not work because value_counts operates on a Series, not on a DataFrame groupby.

Is there any way I can achieve this without looping?
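For what it's worth, a minimal sketch of one vectorized route: value_counts does work once you select the term column from the groupby (it is defined on the grouped Series, not the grouped DataFrame), and unstack then pivots the counts into columns, with fill_value=0 filling in (id, group, term) combinations that never occur:

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'),
    (1, 2, 'term2'),
    (1, 1, 'term1'),
    (1, 1, 'term2'),
    (2, 2, 'term3'),
    (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

# value_counts is available on the grouped Series, not the grouped DataFrame
counts = df.groupby(['id', 'group'])['term'].value_counts().unstack(fill_value=0)
print(counts)
```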

4 Answers

@MaxU 2016-08-24 20:53:14

using pivot_table() method:

In [22]: df.pivot_table(index=['id','group'], columns='term', aggfunc='size', fill_value=0)
Out[22]:
term      term1  term2  term3
id group
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0

Timing against a 700K-row DF:

In [24]: df = pd.concat([df] * 10**5, ignore_index=True)

In [25]: df.shape
Out[25]: (700000, 3)

In [3]: %timeit df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0)
1 loop, best of 3: 226 ms per loop

In [4]: %timeit df.pivot_table(index=['id','group'], columns='term', aggfunc='size', fill_value=0)
1 loop, best of 3: 236 ms per loop

In [5]: %timeit pd.crosstab([df.id, df.group], df.term)
1 loop, best of 3: 355 ms per loop

In [6]: %timeit df.groupby(['id','group','term'])['term'].size().unstack().fillna(0).astype(int)
1 loop, best of 3: 232 ms per loop

In [7]: %timeit df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
1 loop, best of 3: 231 ms per loop

Timing against a 7M-row DF:

In [9]: df = pd.concat([df] * 10, ignore_index=True)

In [10]: df.shape
Out[10]: (7000000, 3)

In [11]: %timeit df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0)
1 loop, best of 3: 2.27 s per loop

In [12]: %timeit df.pivot_table(index=['id','group'], columns='term', aggfunc='size', fill_value=0)
1 loop, best of 3: 2.3 s per loop

In [13]: %timeit pd.crosstab([df.id, df.group], df.term)
1 loop, best of 3: 3.37 s per loop

In [14]: %timeit df.groupby(['id','group','term'])['term'].size().unstack().fillna(0).astype(int)
1 loop, best of 3: 2.28 s per loop

In [15]: %timeit df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
1 loop, best of 3: 1.89 s per loop

@piRSquared 2016-08-24 21:07:07

I was just trying to update timings with larger sample :-)

@piRSquared 2016-08-24 21:08:16

wow! pivot seems just as efficient at larger scales. I'll have to remember that. I'd give you +1 but I already did a while ago.

@ayhan 2016-08-24 21:10:40

So size was the alias that we forgot here :)

@MaxU 2016-08-24 21:16:53

@ayhan, very strange - this time the solution with df.assign(ones = np.ones(len(df))).pivot_table(index=['id','group'], columns='term', values = 'ones', aggfunc=np.sum, fill_value=0) is a bit slower - 1 loop, best of 3: 2.55 s per loop

@ayhan 2016-08-24 21:18:19

I think it is because you used len there instead of 'size'. len is a Python function, but the functions we pass as strings are aliases for optimized C functions.
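To illustrate the point with results rather than timings, a small sketch: the string alias and the plain Python callable agree on the counts; only the dispatch path (optimized aggregation vs. per-group Python call) differs:

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'), (1, 2, 'term2'), (1, 1, 'term1'),
    (1, 1, 'term2'), (2, 2, 'term3'), (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

# 'size' is dispatched to pandas' optimized aggregation;
# len is called once per group as an ordinary Python function
by_alias = df.groupby(['id', 'group', 'term'])['term'].agg('size')
by_len = df.groupby(['id', 'group', 'term'])['term'].agg(len)

print((by_alias == by_len).all())  # -> True
```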

@MaxU 2016-08-24 21:21:15

@ayhan, I have the same timing if I use df.shape[0] instead of len(df)

@jezrael 2016-08-24 22:03:09

@MaxU - can you add to timings df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0) ? Thanks.

@MaxU 2016-08-25 07:17:34

@jezrael, I've added a timing for this solution

@jezrael 2016-08-25 07:19:59

@MaxU - very interesting. I get the same timings with the fastest method, but you get different ones. Thank you for the timings.

@piRSquared 2016-08-24 20:57:41

I use groupby and size

df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)

term      term1  term2  term3
id group
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0


Timing

(timing plot not reproduced)

1,000,000 rows

df = pd.DataFrame(dict(id=np.random.choice(100, 1000000),
                       group=np.random.choice(20, 1000000),
                       term=np.random.choice(10, 1000000)))

(timing results plot not reproduced)

@piRSquared 2016-08-24 21:02:30

@jezrael thanks, size is quicker too. crosstab is oddly inefficient

@jezrael 2016-08-24 21:09:39

And I am surprised that crosstab is so slow ;)

@MaxU 2016-08-24 21:12:30

@jezrael, crosstab uses pivot_table internally... ;)
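A quick check bears this out: the two calls produce the same table on this data (this compares results only, it is not the library source):

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'), (1, 2, 'term2'), (1, 1, 'term1'),
    (1, 1, 'term2'), (2, 2, 'term3'), (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

ct = pd.crosstab([df.id, df.group], df.term)
pt = df.pivot_table(index=['id', 'group'], columns='term',
                    aggfunc='size', fill_value=0)

print((ct.values == pt.values).all())  # -> True
```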

@jezrael 2016-08-24 22:03:40

@piRSquared - can you add to timings df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0) ? It seems faster for me. Thanks.

@jezrael 2016-08-24 22:08:22

@piRSquared - I tried it on a larger df and it is a bit faster (by 0.2 ms - maybe it is the same ;))

@piRSquared 2016-08-24 22:14:30

@jezrael they are all the same except for crosstab

@jezrael 2016-08-24 20:47:25

You can use crosstab:

print (pd.crosstab([df.id, df.group], df.term))
term      term1  term2  term3
id group                     
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0

Another solution: groupby with size aggregation, reshaped by unstack:

df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0)

term      term1  term2  term3
id group                     
1  1          2      1      0
   2          0      1      0
2  2          1      0      1
   3          1      0      0
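To unpack why this works: the groupby/size step yields a Series of counts indexed by a three-level MultiIndex (id, group, term), and unstack moves the innermost level (term) into columns, with fill_value=0 standing in for combinations that never occur:

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'), (1, 2, 'term2'), (1, 1, 'term1'),
    (1, 1, 'term2'), (2, 2, 'term3'), (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

# Step 1: a MultiIndexed Series of counts, e.g. (1, 1, 'term1') -> 2
counts = df.groupby(['id', 'group', 'term'])['term'].size()

# Step 2: pivot the innermost index level ('term') into columns
wide = counts.unstack(fill_value=0)
print(wide)
```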

Timings:

df = pd.concat([df]*10000).reset_index(drop=True)

In [48]: %timeit (df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0))
100 loops, best of 3: 12.4 ms per loop

In [49]: %timeit (df.groupby(['id', 'group', 'term'])['term'].size().unstack(fill_value=0))
100 loops, best of 3: 12.2 ms per loop

@Salvador Dali 2016-08-24 20:53:23

wow wow wow, you are amazing. And it took you only 3 minutes (the same time it took me to write the loop, and less time than it took me to write this question). I would really appreciate it if you could write some explanation of why this works, but most probably I will be able to understand it by myself in a few minutes.

@jezrael 2016-08-24 20:57:06

In your case crosstab is better than pivot_table, because the default aggregation function is len (which is the same as size), and I think it is also the faster solution. crosstab uses the first argument as the index and the second as the columns. Give me some time, I'll try to add timings.

@jezrael 2016-08-24 20:58:39

But I think it is explained better in the docs.

@A.Kot 2016-08-24 21:46:53

Instead of remembering lengthy solutions, how about the one that pandas has built in for you:

df.groupby(['id', 'group', 'term']).count()
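One caveat, if I read the groupby semantics right: count() counts non-null values in the columns that are *not* grouping keys, and here all three columns are keys, so there is nothing left to count; size() (plus unstack, to match the wide table in the question) is the closer fit. A sketch:

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, 'term1'), (1, 2, 'term2'), (1, 1, 'term1'),
    (1, 1, 'term2'), (2, 2, 'term3'), (2, 3, 'term1'),
    (2, 2, 'term1')
], columns=['id', 'group', 'term'])

# All three columns are grouping keys -> no value columns remain for count()
counted = df.groupby(['id', 'group', 'term']).count()
print(counted.shape)  # -> (6, 0)

# size() counts rows per group regardless of value columns
wide = df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
```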
