By Roman


2013-10-15 15:00:12 8 Comments

I have a data frame df and I use several columns from it to groupby:

df['col1','col2','col3','col4'].groupby(['col1','col2']).mean()

In the above way I almost get the table (data frame) that I need. What is missing is an additional column that contains the number of rows in each group. In other words, I have the means, but I also would like to know how many numbers were used to compute them. For example, in the first group there are 8 values, in the second one 10, and so on.


@Pedro M Duarte 2015-09-26 19:34:51

Quick Answer:

The simplest way to get row counts per group is by calling .size(), which returns a Series:

df.groupby(['col1','col2']).size()


Usually you want this result as a DataFrame (instead of a Series) so you can do:

df.groupby(['col1', 'col2']).size().reset_index(name='counts')


If you want to find out how to calculate the row counts and other statistics for each group, continue reading below.
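For the original question specifically (group means plus a single row-count column), one compact sketch is to compute the means and then attach the group sizes, which align on the group index. The toy frame and the 'counts' name here are illustrative, not from the question:

```python
import pandas as pd

# Toy frame standing in for the question's df (values are illustrative)
df = pd.DataFrame({'col1': ['A', 'A', 'C'],
                   'col2': ['B', 'B', 'D'],
                   'col3': [1.0, 2.0, 3.0],
                   'col4': [4.0, 5.0, 6.0]})

means = df.groupby(['col1', 'col2']).mean()
means['counts'] = df.groupby(['col1', 'col2']).size()  # aligns on the group index
result = means.reset_index()
print(result)
```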


Detailed example:

Consider the following example dataframe:

In [2]: df
Out[2]: 
  col1 col2  col3  col4  col5  col6
0    A    B  0.20 -0.61 -0.49  1.49
1    A    B -1.53 -1.01 -0.39  1.82
2    A    B -0.44  0.27  0.72  0.11
3    A    B  0.28 -1.32  0.38  0.18
4    C    D  0.12  0.59  0.81  0.66
5    C    D -0.13 -1.65 -1.64  0.50
6    C    D -1.42 -0.11 -0.18 -0.44
7    E    F -0.00  1.42 -0.26  1.17
8    E    F  0.91 -0.47  1.35 -0.34
9    G    H  1.48 -0.63 -1.14  0.17

First let's use .size() to get the row counts:

In [3]: df.groupby(['col1', 'col2']).size()
Out[3]: 
col1  col2
A     B       4
C     D       3
E     F       2
G     H       1
dtype: int64

Then let's use .size().reset_index(name='counts') to get the row counts as a DataFrame:

In [4]: df.groupby(['col1', 'col2']).size().reset_index(name='counts')
Out[4]: 
  col1 col2  counts
0    A    B       4
1    C    D       3
2    E    F       2
3    G    H       1
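A common follow-up once the counts are in a regular DataFrame is to see the most frequent groups first. A sketch (with illustrative data) is to sort by the counts column:

```python
import pandas as pd

df = pd.DataFrame({'col1': list('AACCE'), 'col2': list('BBDDF')})
counts = df.groupby(['col1', 'col2']).size().reset_index(name='counts')
# Largest groups first; drop=True discards the old integer index
counts = counts.sort_values('counts', ascending=False).reset_index(drop=True)
print(counts)
```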


Including results for more statistics

When you want to calculate statistics on grouped data, it usually looks like this:

In [5]: (df
   ...: .groupby(['col1', 'col2'])
   ...: .agg({
   ...:     'col3': ['mean', 'count'], 
   ...:     'col4': ['median', 'min', 'count']
   ...: }))
Out[5]: 
            col4                  col3      
          median   min count      mean count
col1 col2                                   
A    B    -0.810 -1.32     4 -0.372500     4
C    D    -0.110 -1.65     3 -0.476667     3
E    F     0.475 -0.47     2  0.455000     2
G    H    -0.630 -0.63     1  1.480000     1

The result above is a little annoying to deal with because of the nested column labels, and also because the row counts are reported on a per-column basis.
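One way to tame the nested labels (a sketch, not part of the original answer) is to flatten the column MultiIndex by joining its two levels into single strings:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'A', 'C'],
                   'col2': ['B', 'B', 'D'],
                   'col3': [1.0, 2.0, 3.0],
                   'col4': [4.0, 5.0, 6.0]})

stats = df.groupby(['col1', 'col2']).agg({
    'col3': ['mean', 'count'],
    'col4': ['median', 'min', 'count'],
})
# Join the two header levels, e.g. ('col3', 'mean') -> 'col3_mean'
stats.columns = ['_'.join(col) for col in stats.columns]
stats = stats.reset_index()
print(stats.columns.tolist())
```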

To gain more control over the output I usually split the statistics into individual aggregations that I then combine using join. It looks like this:

In [6]: gb = df.groupby(['col1', 'col2'])
   ...: counts = gb.size().to_frame(name='counts')
   ...: (counts
   ...:  .join(gb.agg({'col3': 'mean'}).rename(columns={'col3': 'col3_mean'}))
   ...:  .join(gb.agg({'col4': 'median'}).rename(columns={'col4': 'col4_median'}))
   ...:  .join(gb.agg({'col4': 'min'}).rename(columns={'col4': 'col4_min'}))
   ...:  .reset_index()
   ...: )
   ...: 
Out[6]: 
  col1 col2  counts  col3_mean  col4_median  col4_min
0    A    B       4  -0.372500       -0.810     -1.32
1    C    D       3  -0.476667       -0.110     -1.65
2    E    F       2   0.455000        0.475     -0.47
3    G    H       1   1.480000       -0.630     -0.63
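On pandas 0.25 and later, named aggregation produces the same flat, single-level result in one .agg() call; using 'size' for the counts counts rows rather than non-null values. The output column names here are illustrative:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'A', 'C'],
                   'col2': ['B', 'B', 'D'],
                   'col3': [1.0, 2.0, 3.0],
                   'col4': [4.0, 5.0, 6.0]})

out = (df.groupby(['col1', 'col2'])
         .agg(counts=('col3', 'size'),        # rows per group
              col3_mean=('col3', 'mean'),
              col4_median=('col4', 'median'),
              col4_min=('col4', 'min'))
         .reset_index())
print(out)
```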



Footnotes

The code used to generate the test data is shown below:

In [1]: import numpy as np
   ...: import pandas as pd 
   ...: 
   ...: keys = np.array([
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['A', 'B'],
   ...:         ['C', 'D'],
   ...:         ['C', 'D'],
   ...:         ['C', 'D'],
   ...:         ['E', 'F'],
   ...:         ['E', 'F'],
   ...:         ['G', 'H'] 
   ...:         ])
   ...: 
   ...: df = pd.DataFrame(
   ...:     np.hstack([keys,np.random.randn(10,4).round(2)]), 
   ...:     columns = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6']
   ...: )
   ...: 
   ...: df[['col3', 'col4', 'col5', 'col6']] = \
   ...:     df[['col3', 'col4', 'col5', 'col6']].astype(float)
   ...: 


Disclaimer:

If some of the columns that you are aggregating have null values, then you really want to be looking at the group row counts as an independent aggregation for each column. Otherwise you may be misled as to how many records are actually being used to calculate things like the mean because pandas will drop NaN entries in the mean calculation without telling you about it.
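A minimal demonstration of that pitfall, assuming a toy frame with one missing value: .size() counts rows per group, while .count() counts only non-null entries, and .mean() silently ignores the NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'key': ['A', 'A', 'B'],
                   'val': [1.0, np.nan, 3.0]})
gb = df.groupby('key')

print(gb.size())          # rows per group:      A -> 2, B -> 1
print(gb['val'].count())  # non-null 'val' rows: A -> 1, B -> 1
print(gb['val'].mean())   # A's mean uses only the single non-null value
```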

@Quickbeam2k1 2016-08-17 11:26:44

Hey, I really like your solution, particularly the last one, where you use method chaining. However, since it is often necessary to apply different aggregation functions to different columns, one could also concatenate the resulting data frames using pd.concat. This may be easier to read than subsequent chaining.

@LancelotHolmes 2017-02-28 02:35:19

Nice solution, but for In [5]: counts_df = pd.DataFrame(df.groupby('col1').size().rename('counts')), maybe it's better to set the size() as a new column if you'd like to manipulate the dataframe for further analysis, which would be counts_df = pd.DataFrame(df.groupby('col1').size().reset_index(name='counts'))

@Nickolay 2018-05-28 08:17:22

Thanks for the "Including results for more statistics" bit! Since my next search was about flattening the resulting multiindex on columns, I'll link to the answer here: stackoverflow.com/a/50558529/1026

@Peter.k 2019-01-18 10:31:03

Great! Could you please give me a hint how to add isnull to this query to have it in one column as well? 'col4': ['median', 'min', 'count', 'isnull']

@Nimesh 2017-11-27 09:17:56

We can easily do it by using groupby and count. But we should remember to use reset_index().

df[['col1','col2','col3','col4']].groupby(['col1','col2']).count().\
reset_index()

@Adrien Pacifico 2018-07-09 00:59:40

This solution works as long as there is no null value in the columns; otherwise it can be misleading (the count will be lower than the actual number of observations per group).

@Boud 2013-10-15 15:49:28

On a groupby object, the agg function can take a list to apply several aggregation methods at once. This should give you the result you need:

df[['col1', 'col2', 'col3', 'col4']].groupby(['col1', 'col2']).agg(['mean', 'count'])

@rysqui 2014-12-17 06:14:56

I think you need the column reference to be a list. Do you perhaps mean: df[['col1','col2','col3','col4']].groupby(['col1','col2']).agg(['mean', 'count'])

@Jaan 2015-07-22 06:58:09

This creates four count columns, but how to get only one? (The question asks for "an additional column" and that's what I would like too.)

@Pedro M Duarte 2015-09-26 19:43:37

Please see my answer if you want to get only one count column per group.

@Abhishek Bhatia 2017-10-02 21:28:32

What if I have a separate column called Counts and, instead of counting the rows of each group, I need to sum along the column Counts?
