By robmerica


2009-02-27 19:19:31 8 Comments

I'm looking for some kind of formula or algorithm to determine the brightness of a color given the RGB values. I know it can't be as simple as adding the RGB values together and having higher sums be brighter, but I'm kind of at a loss as to where to start.

20 comments

@Synchro 2013-06-27 12:56:06

Rather than getting lost amongst the random selection of formulae mentioned here, I suggest you go for the formula recommended by W3C standards.

Here's a straightforward but exact PHP implementation of the WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formulae. It produces values that are appropriate for evaluating the ratios required for WCAG compliance, as on this page, and as such is suitable and appropriate for any web app. This is trivial to port to other languages.

/**
 * Calculate relative luminance in sRGB colour space for use in WCAG 2.0 compliance
 * @link http://www.w3.org/TR/WCAG20/#relativeluminancedef
 * @param string $col A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <[email protected]>
 */
function relativeluminance($col) {
    //Remove any leading #
    $col = trim($col, '#');
    //Convert 3-digit to 6-digit
    if (strlen($col) == 3) {
        $col = $col[0] . $col[0] . $col[1] . $col[1] . $col[2] . $col[2];
    }
    //Convert hex to 0-1 scale
    $components = array(
        'r' => hexdec(substr($col, 0, 2)) / 255,
        'g' => hexdec(substr($col, 2, 2)) / 255,
        'b' => hexdec(substr($col, 4, 2)) / 255
    );
    //Correct for sRGB
    foreach($components as $c => $v) {
        if ($v <= 0.04045) {
            $components[$c] = $v / 12.92;
        } else {
            $components[$c] = pow((($v + 0.055) / 1.055), 2.4);
        }
    }
    //Calculate relative luminance using ITU-R BT. 709 coefficients
    return ($components['r'] * 0.2126) + ($components['g'] * 0.7152) + ($components['b'] * 0.0722);
}

/**
 * Calculate contrast ratio according to WCAG 2.0 formula
 * Will return a value between 1 (no contrast) and 21 (max contrast)
 * @link http://www.w3.org/TR/WCAG20/#contrast-ratiodef
 * @param string $c1 A 3 or 6-digit hex colour string
 * @param string $c2 A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <[email protected]>
 */
function contrastratio($c1, $c2) {
    $y1 = relativeluminance($c1);
    $y2 = relativeluminance($c2);
    //Arrange so $y1 is lightest
    if ($y1 < $y2) {
        $y3 = $y1;
        $y1 = $y2;
        $y2 = $y3;
    }
    return ($y1 + 0.05) / ($y2 + 0.05);
}

@user151496 2014-03-15 17:15:46

Why would you prefer the W3C definition? Personally I have implemented both CCIR 601 and the W3C-recommended one, and I was much more satisfied with the CCIR 601 results.

@Synchro 2014-03-16 11:42:58

Because, as I said, it's recommended by both the W3C and WCAG?

@zenw0lf 2018-09-12 23:32:15

@JiveDadson As I see it, it is applied right there where it says //Correct for sRGB. At least it is almost the same operation you have defined as inv_gam_sRGB. So I think this is correct.

@Myndex 2019-04-12 19:35:34

The W3C formula is incorrect on a number of levels. It does not take human perception into account: it uses "simple" contrast based on luminance, which is linear and not at all perceptually uniform. Among other things, it appears they based it on standards as old as 1988 (!!!) which are not relevant today (those standards were based on monochrome monitors such as green/black, and referred to the total contrast from on to off, not considering greyscale or colors).

@Synchro 2019-04-12 20:56:13

That’s complete rubbish. Luma is specifically perceptual - that’s why it has different coefficients for red, green, and blue. Age has nothing to do with it - the excellent CIE Lab perceptual colour space dates from 1976. The W3C space isn’t as good, however it is a good practical approximation that is easy to calculate. If you have something constructive to offer, post that instead of empty criticism.

@Myndex 2019-04-13 00:28:22

@Synchro no, luma is a GAMMA ENCODED (Y´) part of some video encodings (such as NTSC's YIQ). Luminance, i.e. Y as in CIEXYZ, is LINEAR, and not at all perceptual. The W3C is using linear luminance and simple contrast, which does not properly define contrast in the mid range (it is way off). I'm writing an article on this right now; I'll post the link when complete. Yes, CIELAB is excellent, but the W3C ARE NOT USING IT. The outdated doc I am referring to is ANSI-HFES-100-1988, which is not appropriate for on-screen color contrasts.

@Myndex 2019-06-19 23:21:57

Just to add/update: we are currently researching replacement algorithms that better model perceptual contrast (discussion in Github Issue 695). However, as a separate issue FYI the threshold for sRGB is 0.04045, and not 0.03928 which was referenced from an obsolete early sRGB draft. The authoritative IEC std uses 0.04045 and a pull request is forthcoming to correct this error in the WCAG. (ref: IEC 61966-2-1:1999) This is in Github issue 360, though to mention, in 8bit there is no actual difference — near end of thread 360 I have charts of errors including 0.04045/0.03928 in 8bit.

@EddingtonsMonkey 2016-06-03 21:53:07

Here's a bit of C code that should properly calculate perceived luminance.

#include <math.h>   // for pow() and powf()

// PIXEL is assumed here to be a struct with 8-bit r, g, b members, e.g.
// typedef struct { unsigned char r, g, b; } PIXEL;

// reverses the rgb gamma
#define inverseGamma(t) (((t) <= 0.0404482362771076) ? ((t)/12.92) : pow(((t) + 0.055)/1.055, 2.4))

//CIE L*a*b* f function (used to convert XYZ to L*a*b*)  http://en.wikipedia.org/wiki/Lab_color_space
#define LABF(t) (((t) >= 8.85645167903563082e-3) ? powf((t), 0.333333333333333) : (841.0/108.0)*(t) + (4.0/29.0))


float
rgbToCIEL(PIXEL p)
{
   float y;
   float r=p.r/255.0;
   float g=p.g/255.0;
   float b=p.b/255.0;

   r=inverseGamma(r);
   g=inverseGamma(g);
   b=inverseGamma(b);

   //Observer = 2°, Illuminant = D65 
   y = 0.2125862307855955516*r + 0.7151703037034108499*g + 0.07220049864333622685*b;

   // At this point we've done RGBtoXYZ now do XYZ to Lab

   // y /= WHITEPOINT_Y; The white point for y in D65 is 1.0

    y = LABF(y);

   /* This is the "normal" conversion, which produces values scaled to 0..100:
    Lab.L = 116.0*y - 16.0;
   */
   return(1.16*y - 0.16); // return value is scaled so that 0.0 <= L <= 1.0
}

@catamphetamine 2020-01-05 17:26:44

I was solving a similar task today in javascript. I've settled on this getPerceivedLightness(rgb) function for a HEX RGB color. It deals with the Helmholtz-Kohlrausch effect via the Fairchild and Pirrotta formula for luminance correction.

/**
 * Converts RGB color to CIE 1931 XYZ color space.
 * https://www.image-engineering.de/library/technotes/958-how-to-convert-between-srgb-and-ciexyz
 * @param  {string} hex
 * @return {number[]}
 */
export function rgbToXyz(hex) {
    const [r, g, b] = hexToRgb(hex).map(_ => _ / 255).map(sRGBtoLinearRGB)
    const X =  0.4124 * r + 0.3576 * g + 0.1805 * b
    const Y =  0.2126 * r + 0.7152 * g + 0.0722 * b
    const Z =  0.0193 * r + 0.1192 * g + 0.9505 * b
    // Scale X, Y and Z by 100 so they match the D65 white point values (95.047, 100, 108.883) used in xyzToLab() below.
    return [X, Y, Z].map(_ => _ * 100)
}

/**
 * Undoes gamma-correction from an RGB-encoded color.
 * https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation
 * https://stackoverflow.com/questions/596216/formula-to-determine-brightness-of-rgb-color
 * @param  {number}
 * @return {number}
 */
function sRGBtoLinearRGB(color) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.
    if (color <= 0.04045) {
        return color / 12.92
    } else {
        return Math.pow((color + 0.055) / 1.055, 2.4)
    }
}

/**
 * Converts hex color to RGB.
 * https://stackoverflow.com/questions/5623838/rgb-to-hex-and-hex-to-rgb
 * @param  {string} hex
 * @return {number[]} [rgb]
 */
function hexToRgb(hex) {
    const match = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex)
    if (match) {
        match.shift()
        return match.map(_ => parseInt(_, 16))
    }
}

/**
 * Converts CIE 1931 XYZ colors to CIE L*a*b*.
 * The conversion formula comes from <http://www.easyrgb.com/en/math.php>.
 * https://github.com/cangoektas/xyz-to-lab/blob/master/src/index.js
 * @param   {number[]} color The CIE 1931 XYZ color to convert which refers to
 *                           the D65/2° standard illuminant.
 * @returns {number[]}       The color in the CIE L*a*b* color space.
 */
// X, Y, Z of a "D65" light source.
// "D65" is a standard 6500K Daylight light source.
// https://en.wikipedia.org/wiki/Illuminant_D65
const D65 = [95.047, 100, 108.883]
export function xyzToLab([x, y, z]) {
  [x, y, z] = [x, y, z].map((v, i) => {
    v = v / D65[i]
    return v > 0.008856 ? Math.pow(v, 1 / 3) : v * 7.787 + 16 / 116
  })
  const l = 116 * y - 16
  const a = 500 * (x - y)
  const b = 200 * (y - z)
  return [l, a, b]
}

/**
 * Converts Lab color space to Luminance-Chroma-Hue color space.
 * http://www.brucelindbloom.com/index.html?Eqn_Lab_to_LCH.html
 * @param  {number[]}
 * @return {number[]}
 */
export function labToLch([l, a, b]) {
    const c = Math.sqrt(a * a + b * b)
    const h = abToHue(a, b)
    return [l, c, h]
}

/**
 * Converts a and b of Lab color space to Hue of LCH color space.
 * https://stackoverflow.com/questions/53733379/conversion-of-cielab-to-cielchab-not-yielding-correct-result
 * @param  {number} a
 * @param  {number} b
 * @return {number}
 */
function abToHue(a, b) {
    if (a >= 0 && b === 0) {
        return 0
    }
    if (a < 0 && b === 0) {
        return 180
    }
    if (a === 0 && b > 0) {
        return 90
    }
    if (a === 0 && b < 0) {
        return 270
    }
    let xBias
    if (a > 0 && b > 0) {
        xBias = 0
    } else if (a < 0) {
        xBias = 180
    } else if (a > 0 && b < 0) {
        xBias = 360
    }
    return radiansToDegrees(Math.atan(b / a)) + xBias
}

function radiansToDegrees(radians) {
    return radians * (180 / Math.PI)
}

function degreesToRadians(degrees) {
    return degrees * Math.PI / 180
}

/**
 * Saturated colors appear brighter to human eye.
 * That's called Helmholtz-Kohlrausch effect.
 * Fairchild and Pirrotta came up with a formula to
 * calculate a correction for that effect.
 * "Color Quality of Semiconductor and Conventional Light Sources":
 * https://books.google.ru/books?id=ptDJDQAAQBAJ&pg=PA45&lpg=PA45&dq=fairchild+pirrotta+correction&source=bl&ots=7gXR2MGJs7&sig=ACfU3U3uIHo0ZUdZB_Cz9F9NldKzBix0oQ&hl=ru&sa=X&ved=2ahUKEwi47LGivOvmAhUHEpoKHU_ICkIQ6AEwAXoECAkQAQ#v=onepage&q=fairchild%20pirrotta%20correction&f=false
 * @return {number}
 */
function getLightnessUsingFairchildPirrottaCorrection([l, c, h]) {
    const l_ = 2.5 - 0.025 * l
    const g = 0.116 * Math.abs(Math.sin(degreesToRadians((h - 90) / 2))) + 0.085
    return l + l_ * g * c
}

export function getPerceivedLightness(hex) {
    return getLightnessUsingFairchildPirrottaCorrection(labToLch(xyzToLab(rgbToXyz(hex))))
}
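
A quick usage sketch of the exported function (hypothetical example values, not part of the original code):

// getPerceivedLightness() is exported above.
console.log(getPerceivedLightness('#ff0000')) // saturated red: higher than its plain CIELAB L*,
                                              // because the Helmholtz-Kohlrausch correction adds to it
console.log(getPerceivedLightness('#777777')) // neutral grey: chroma is ~0, so almost nothing is added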

@Myndex 2019-06-20 03:16:09

The "Accepted" Answer is Incorrect and Incomplete

The only answers that are accurate are the @jive-dadson and @EddingtonsMonkey answers, and in support @nils-pipenbrinck. The other answers (including the accepted) are linking to or citing sources that are either wrong, irrelevant, obsolete, or broken.

Briefly:

  • sRGB must be LINEARIZED before applying the coefficients.
  • Luminance (L or Y) is linear as is light.
  • Perceived lightness (L*) is nonlinear as is human perception.
  • HSV and HSL are not even remotely accurate in terms of perception.
  • The IEC standard for sRGB specifies a threshold of 0.04045; it is NOT 0.03928 (that was from an obsolete early draft).
  • To be useful (i.e. relative to perception), Euclidean distances require a perceptually uniform Cartesian vector space such as CIELAB. sRGB is not one.

What follows is a correct and complete answer:

Because this thread appears highly in search engines, I am adding this answer to clarify the various misconceptions on the subject.

Brightness is a perceptual attribute; it does not have a direct measure.

Perceived lightness is measured by some vision models such as CIELAB, where L* (Lstar) is a measure of perceptual lightness, and is non-linear to approximate the human vision non-linear response curve.

Luminance is a linear measure of light, spectrally weighted for normal vision but not adjusted for non-linear perception of lightness.

Luma (Y´, Y prime) is a gamma-encoded, weighted signal used in some video encodings. It is not to be confused with linear luminance.

Gamma or transfer curve (TRC) is a curve that is often similar to the perceptual curve, and is commonly applied to image data for storage or broadcast to reduce perceived noise and/or improve data utilization (and related reasons).

To determine perceived lightness, first convert gamma-encoded R´G´B´ image values to linear luminance (L or Y), and then to non-linear perceived lightness (L*).


TO FIND LUMINANCE:

...Because apparently it was lost somewhere...

Step One:

Convert all sRGB 8 bit integer values to decimal 0.0-1.0

  vR = sR / 255;
  vG = sG / 255;
  vB = sB / 255;

Step Two:

Convert a gamma encoded RGB to a linear value. sRGB (computer standard) for instance requires a power curve of approximately V^2.2, though the "accurate" transform is:

sRGB to Linear:

    Vlin = V´ / 12.92                      if V´ <= 0.04045
    Vlin = ((V´ + 0.055) / 1.055) ^ 2.4    if V´ > 0.04045

Where V´ is the gamma-encoded R, G, or B channel of sRGB.
Pseudocode:

function sRGBtoLin(colorChannel) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.

    if ( colorChannel <= 0.04045 ) {
        return colorChannel / 12.92;
    } else {
        return pow((( colorChannel + 0.055)/1.055), 2.4);
    }
}

Step Three:

To find Luminance (Y) apply the standard coefficients for sRGB:

Y = R * 0.2126 + G * 0.7152 + B * 0.0722

Pseudocode using above functions:

Y = (0.2126 * sRGBtoLin(vR) + 0.7152 * sRGBtoLin(vG) + 0.0722 * sRGBtoLin(vB))

TO FIND PERCEIVED LIGHTNESS:

Step Four:

Take luminance Y from above, and transform to L*

L* from Y:

    L* = Y * (24389 / 27)            if Y <= 216 / 24389
    L* = Y ^ (1/3) * 116 - 16        if Y > 216 / 24389

Pseudocode:

function YtoLstar(Y) {
    // Send this function a luminance value between 0.0 and 1.0,
    // and it returns L* which is "perceptual lightness"

    if ( Y <= (216.0/24389.0) ) {     // The CIE standard states 0.008856, but 216/24389 is the intent (0.008856451679036...)
        return Y * (24389.0/27.0);    // The CIE standard states 903.3, but 24389/27 is the intent (903.296296296296...)
    } else {
        return pow(Y, (1.0/3.0)) * 116 - 16;
    }
}

L* is a value from 0 (black) to 100 (white) where 50 is the perceptual "middle grey". L* = 50 is the equivalent of Y ≈ 0.184 (i.e. 18.4%), or in other words an 18% grey card, representing the middle of a photographic exposure (Ansel Adams zone V).
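
For convenience, here is the whole chain as a single JavaScript sketch of the four steps above (the function names are mine and purely illustrative):

// Sketch: 8-bit sRGB channels -> linear luminance Y -> perceptual lightness L*.
function sRGBtoLin(colorChannel) {
    return colorChannel <= 0.04045
        ? colorChannel / 12.92
        : Math.pow((colorChannel + 0.055) / 1.055, 2.4);
}

function YtoLstar(Y) {
    return Y <= 216 / 24389
        ? Y * (24389 / 27)
        : Math.pow(Y, 1 / 3) * 116 - 16;
}

function rgbToLstar(sR, sG, sB) {
    // Step one: 8-bit integers to decimal 0.0-1.0
    const vR = sR / 255, vG = sG / 255, vB = sB / 255;
    // Steps two and three: linearize each channel, then apply the sRGB coefficients
    const Y = 0.2126 * sRGBtoLin(vR) + 0.7152 * sRGBtoLin(vG) + 0.0722 * sRGBtoLin(vB);
    // Step four: luminance to perceptual lightness
    return YtoLstar(Y);
}

rgbToLstar(119, 119, 119); // approximately 50, the perceptual "middle grey" (#777777)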

References:

IEC 61966-2-1:1999 Standard
Wikipedia sRGB
Wikipedia CIELAB
Wikipedia CIEXYZ
Charles Poynton's Gamma FAQ

@Myndex 2019-07-20 13:15:53

@Rotem thank you — I saw some odd and incomplete statements and felt it would be helpful to nail it down, particularly as this thread still ranks highly on search engines.

@Rotem 2019-07-20 20:56:28

I created a demonstration comparing BT.601 Luma and CIE 1976 L* Perceptual Gray, using a few MATLAB commands: Luma=rgb2gray(RGB); LAB=rgb2lab(RGB); LAB(:,:,2:3)=0; PerceptualGray=lab2rgb(LAB);

@sjahan 2020-01-05 08:14:09

@Myndex I used your formulas to get to L*, but I still get some strange results, whatever the formula I use... With yours, L* of #d05858 is darker than L* of #c51c2a... Is there any way to get this right? Why does no formula work as expected? :(

@catamphetamine 2020-01-05 17:28:59

Hi. Your answer seems to be the most legit. However, I was solving a similar task today and found out about the Helmholtz-Kohlrausch effect, which makes a red color appear brighter than a gray color with the same luminance. See my answer below for the Helmholtz-Kohlrausch effect correction and the true "perceived brightness" of a color.

@Myndex 2020-01-06 08:25:57

@sjahan L* of D05858 is 53.1, and L* of C51C2A is 42.6 — what results are you getting?? #777777 should be L* 50. Can you show me your code?

@Myndex 2020-01-06 08:31:12

@asdfasdfads Yes, L*a*b* does not take into account a number of psychophysical attributes. Helmholtz-Kohlrausch effect is one, but there are many others. CIELAB is not a "full" image assessment model by any means. In my post I was trying to cover the basic concepts as completely as possible without venturing into the very deep minutiae. The Hunt model, Fairchild's models, and others do a more complete job, but are also substantially more complex.

@sjahan 2020-01-06 09:55:54

@Myndex, nevermind, my implementation was fatigue-based and my poor results came from that :( Thank you very much for your help and your post which is of a great value!

@Adrian McCarthy 2020-06-26 15:18:03

Note that the incorrect threshold from the old sRGB standard is propagated in other publications, including the official description of the minimum text contrast test from Web Content Accessibility Guidelines 2.1. Since accessibility may have motivated the question, that may be useful information. w3.org/WAI/WCAG21/Techniques/general/G17.html#tests

@Myndex 2020-06-27 03:09:09

@AdrianMcCarthy Hi Adrian, thanks, yes I know—I'm working on the new WCAG (Silver i.e. 3.0) right now, and that has been raised as an issue for some time (see my WCAG GitHub issue #695). It's not related to accessibility, just a mistake.... As it happens, this incorrect threshold is not "particularly bad" in 8 bit as it is being used there, but is nevertheless wrong. There hasn't been concerted effort to fix that issue as the new methodology in development is predicting contrast in a completely different way.

@Petr Hurtak 2014-06-13 20:20:52

I have made a comparison of the three algorithms in the accepted answer. I generated colors in a cycle where only about every 400th color was used. Each color is represented by 2x2 pixels; colors are sorted from darkest to lightest (left to right, top to bottom).

1st picture - Luminance (relative)

0.2126 * R + 0.7152 * G + 0.0722 * B

2nd picture - http://www.w3.org/TR/AERT#color-contrast

0.299 * R + 0.587 * G + 0.114 * B

3rd picture - HSP Color Model

sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)

4th picture - WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formula (see @Synchro's answer here)

A pattern can sometimes be spotted on the 1st and 2nd pictures, depending on the number of colors in one row. I never spotted any pattern in the pictures from the 3rd or 4th algorithm.

If I had to choose, I would go with algorithm number 3, since it's much easier to implement and it's about 33% faster than the 4th.

Perceived brightness algorithm comparison
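
For anyone who wants to reproduce this kind of comparison, here is a rough JavaScript sketch of the procedure described above (sampling roughly every 400th color and sorting darkest to lightest; the 2x2 pixel rendering is left out, and the function names are mine):

// Sketch: sample the 24-bit RGB cube and sort it by a brightness formula.
// brightnessFn is whichever formula is being compared, e.g. the HSP form below.
function sortedColorSample(brightnessFn, step = 401) {
    const colors = [];
    for (let c = 0; c < 0x1000000; c += step) {          // roughly every 400th color
        colors.push([(c >> 16) & 0xff, (c >> 8) & 0xff, c & 0xff]);
    }
    colors.sort((a, b) => brightnessFn(...a) - brightnessFn(...b)); // darkest to lightest
    return colors;                                        // draw each entry as a 2x2 pixel block
}

const hsp = (r, g, b) => Math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b);
const sample = sortedColorSample(hsp);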

@CoffeDeveloper 2015-01-08 21:21:13

To me this is the best answer, because you use a picture pattern that lets you perceive whether different hues are rendered with the same luminance. For me and my current monitor the 3rd picture is the "best looking"; since it is also faster than the 4th, that's a plus.

@Max 2017-11-10 17:32:05

Your comparison image is incorrect because you did not provide the correct input to all of the functions. The first function requires linear RGB input; I can only reproduce the banding effect by providing nonlinear (i.e. gamma-corrected) RGB. Correcting this issue, you get no banding artifacts and the 1st function is the clear winner.

@Mark Ransom 2019-03-07 17:57:47

@Max the ^2 and sqrt included in the third formula are a quicker way of approximating linear RGB from non-linear RGB instead of the ^2.2 and ^(1/2.2) that would be more correct. Using nonlinear inputs instead of linear ones is extremely common unfortunately.

@Dave Collier 2016-01-19 14:25:56

For clarity, the formulas that use a square root need to be

sqrt(coefficient * (colour_value^2))

not

sqrt((coefficient * colour_value)^2)

The proof of this lies in the conversion of an R=G=B triad to greyscale: the result will only equal R if you square the colour value, not the colour value times the coefficient. See Nine Shades of Greyscale
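
A quick hypothetical check in JavaScript, using the 0.299/0.587/0.114 coefficients discussed elsewhere on this page:

// Greyscale check: for R = G = B the correct form returns the input value unchanged.
function sqrtOfWeightedSquares(v) {     // sqrt(coefficient * value^2), summed
    return Math.sqrt(0.299 * v * v + 0.587 * v * v + 0.114 * v * v);
}
function sqrtOfSquaredProducts(v) {     // sqrt((coefficient * value)^2), summed -- the wrong form
    return Math.sqrt((0.299 * v) ** 2 + (0.587 * v) ** 2 + (0.114 * v) ** 2);
}

sqrtOfWeightedSquares(200);   // 200: the coefficients sum to 1, so grey is preserved
sqrtOfSquaredProducts(200);   // about 134, which is why the second form is wrong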

@log0 2016-02-01 13:25:16

there are parenthesis mismatches

@RufusVS 2016-06-22 18:56:01

unless the coefficient you use is the square root of the correct coefficient.

@Jive Dadson 2012-11-26 04:16:48

Below is the only CORRECT algorithm for converting sRGB images, as used in browsers etc., to grayscale.

It is necessary to apply an inverse of the gamma function for the color space before calculating the inner product. Then you apply the gamma function to the reduced value. Failure to incorporate the gamma function can result in errors of up to 20%.

For typical computer stuff, the color space is sRGB. The right numbers for sRGB are approx. 0.21, 0.72, 0.07. Gamma for sRGB is a composite function that approximates exponentiation by 1/(2.2). Here is the whole thing in C++.

// sRGB luminance(Y) values
const double rY = 0.212655;
const double gY = 0.715158;
const double bY = 0.072187;

// Inverse of sRGB "gamma" function. (approx 2.2)
double inv_gam_sRGB(int ic) {
    double c = ic/255.0;
    if ( c <= 0.04045 )
        return c/12.92;
    else 
        return pow(((c+0.055)/(1.055)),2.4);
}

// sRGB "gamma" function (approx 2.2)
int gam_sRGB(double v) {
    if(v<=0.0031308)
        v *= 12.92;
    else 
        v = 1.055*pow(v,1.0/2.4)-0.055;
    return int(v*255+0.5); // This is correct in C++. Other languages may not
                           // require +0.5
}

// GRAY VALUE ("brightness")
int gray(int r, int g, int b) {
    return gam_sRGB(
            rY*inv_gam_sRGB(r) +
            gY*inv_gam_sRGB(g) +
            bY*inv_gam_sRGB(b)
    );
}

@JMD 2013-03-21 16:21:46

Why did you use a composite function to approximate the exponent? Why not just do a direct calculation? Thanks

@Jive Dadson 2013-03-22 19:27:07

That is just the way sRGB is defined. I think the reason is that it avoids some numerical problems near zero. It would not make much difference if you just raised the numbers to the powers of 2.2 and 1/2.2.

@Jerry Federspiel 2014-10-02 13:22:34

JMD - as part of work in a visual perception lab, I have done direct luminance measurements on CRT monitors and can confirm that there is a linear region of luminance at the bottom of the range of values.

@DCBillen 2015-05-06 15:08:09

I know this is very old, but it's still out there to be searched. I don't think it can be correct. Shouldn't gray(255,255,255) = gray(255,0,0) + gray(0,255,0) + gray(0,0,255)? It doesn't.

@rdb 2016-01-05 14:26:31

@DCBillen: no, since the values are in non-linear gamma-corrected sRGB space, you can't just add them up. If you wanted to add them up, you should do so before calling gam_sRGB.

@Jive Dadson 2016-11-22 09:02:40

@DCBillen Rdb is correct. The way to add them up is shown in the function int gray(int r, int g, int b), which "uncalls" gam_sRGB. It pains me that after four years, the correct answer is rated so low. :-) Not really.. I will get over it.

@Tim Kuipers 2019-04-30 07:05:15

Your gray function performs a gamma compression afterwards, but luminance itself is the uncompressed gray value, so if you want to compute the brightness then leave out the call to gam_sRGB. If you just want to convert colors in order to display them in black-and-white, then you should leave it in.

@Pierre-louis Stenger 2017-04-10 21:16:01

To determine the brightness of a color with R, I convert the color from the RGB system to the HSV system.

In my script, I use HEX color codes beforehand for other reasons, but you can also start from RGB with rgb2hsv {grDevices}. The documentation is here.

Here is this part of my code:

 sample <- c("#010101", "#303030", "#A6A4A4", "#020202", "#010100")
 hsvc <- rgb2hsv(col2rgb(sample)) # convert HEX to RGB, then RGB to HSV
 value <- as.data.frame(hsvc)     # create a data.frame
 value <- value[3,]               # extract the V (brightness) row
 order(value)                     # order the colors by brightness

@vortex 2017-03-11 07:15:56

I wonder how those rgb coefficients were determined. I did an experiment myself and I ended up with the following:

Y = 0.267 R + 0.642 G + 0.091 B

Close, but obviously different from the long-established ITU coefficients. I wonder if those coefficients could be different for each and every observer, because we all may have a different number of cones and rods on the retina in our eyes, and especially the ratio between the different types of cones may differ.

For reference:

ITU BT.709:

Y = 0.2126 R + 0.7152 G + 0.0722 B

ITU BT.601:

Y = 0.299 R + 0.587 G + 0.114 B

I did the test by quickly moving a small gray bar on a bright red, bright green and bright blue background, and adjusting the gray until it blended in as much as possible. I also repeated the test with other shades, and on different displays, even one with a fixed gamma factor of 3.0, but it all looks the same to me. Moreover, the ITU coefficients are literally wrong for my eyes.

And yes, I presumably have a normal color vision.

@Myndex 2019-06-19 23:28:37

In your experiments did you linearize to remove the gamma component first? If you didn't that could explain your results. BUT ALSO, the coefficients are related to the CIE 1931 experiments and those are an average of 17 observers, so yes there is individual variance in results.

@Franci Penov 2009-02-27 19:25:11

I think what you are looking for is the RGB -> Luma conversion formula.

Photometric/digital ITU BT.709:

Y = 0.2126 R + 0.7152 G + 0.0722 B

Digital ITU BT.601 (gives more weight to the R and B components):

Y = 0.299 R + 0.587 G + 0.114 B

If you are willing to trade accuracy for performance, there are two approximation formulas for this one:

Y = 0.33 R + 0.5 G + 0.16 B

Y = 0.375 R + 0.5 G + 0.125 B

These can be calculated quickly as

Y = (R+R+B+G+G+G)/6

Y = (R+R+R+B+G+G+G+G)>>3
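
For illustration, here are the two quick forms as small JavaScript functions (a sketch assuming 8-bit integer inputs; the names are mine):

// Fast integer approximations of luma for 8-bit R, G, B values (0-255).
function lumaFast6(r, g, b) {
    // (2R + 3G + B) / 6  ~  0.33 R + 0.5 G + 0.16 B
    return Math.floor((r + r + b + g + g + g) / 6);
}
function lumaFast8(r, g, b) {
    // (3R + 4G + B) >> 3  ~  0.375 R + 0.5 G + 0.125 B
    return (r + r + r + b + g + g + g + g) >> 3;
}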

@Beska 2009-02-27 20:39:10

I like that you put in precise values, but also included a quick "close enough" type shortcut. +1.

@Jonathan Dumaine 2010-12-18 01:01:43

How come your 'calculated quickly' values don't include blue in the approximation at all?

@Franci Penov 2010-12-18 01:24:42

@Jonathan Dumaine - the two quick calculation formulas both include blue - 1st one is (2*Red + Blue + 3*Green)/6, 2nd one is (3*Red + Blue + 4*Green)>>3. granted, in both quick approximations, Blue has the lowest weight, but it's still there.

@Jonathan Dumaine 2010-12-26 03:52:22

Hmm don't know why I didn't see the B's in there before.

@Christopher Oezbek 2012-05-24 16:39:59

@JonathanDumaine That's because the human eye is least perceptive to Blue ;-)

@milosmns 2015-01-05 00:34:27

The quick version works well. Tested and applied to real-world app with thousands of users, everything looks fine.

@rjmunro 2015-03-11 11:16:02

The quick version is even faster if you do it as: Y = ((R<<1)+R+(G<<2)+B)>>3 (that's only 3-4 CPU cycles on ARM) but I guess a good compiler will do that optimisation for you.

@Dave Collier 2016-01-19 14:21:24

The inverse-gamma formula by Jive Dadson needs to have the half-adjust removed when implemented in Javascript, i.e. the return from function gam_sRGB needs to be return int(v*255); not return int(v*255+.5); Half-adjust rounds up, and this can cause a value one too high on a R=G=B i.e. grey colour triad. Greyscale conversion on a R=G=B triad should produce a value equal to R; it's one proof that the formula is valid. See Nine Shades of Greyscale for the formula in action (without the half-adjust).

@Jive Dadson 2017-06-15 03:29:10

It sounds like you know your stuff, so I removed the +0.5

@Jive Dadson 2017-06-15 05:23:10

I did the experiment. In C++ it needs the +0.5, so I put it back in. I added a comment about translating to other languages.

@Anonymous 2009-02-27 19:25:59

Do you mean brightness? Perceived brightness? Luminance?

  • Luminance (standard for certain colour spaces): (0.2126*R + 0.7152*G + 0.0722*B) [1]
  • Luminance (perceived option 1): (0.299*R + 0.587*G + 0.114*B) [2]
  • Luminance (perceived option 2, slower to calculate): sqrt( 0.299*R^2 + 0.587*G^2 + 0.114*B^2 ) (thanks to @MatthewHerbst; an earlier revision used sqrt( 0.241*R^2 + 0.691*G^2 + 0.068*B^2 )) [3]
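
For readers who want to try these directly, a small JavaScript sketch of the three options, taking 8-bit R, G, B values (function names are mine; see the comments below about gamma and linear input):

// The three luminance/brightness options listed above, for 8-bit R, G, B (0-255).
function luminanceStandard(r, g, b) {   // option 1 (Rec. 709 coefficients, expects linear RGB)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}
function luminancePerceived1(r, g, b) { // option 2 (Rec. 601 coefficients)
    return 0.299 * r + 0.587 * g + 0.114 * b;
}
function luminancePerceived2(r, g, b) { // option 3 (slower square-root form)
    return Math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b);
}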

@Bob Cross 2009-02-27 19:28:50

Note that both of these emphasize the physiological aspects: the human eyeball is most sensitive to green light, less to red and least to blue.

@Anonymous 2009-02-27 19:34:34

Yes, it all depends on the application. All these models including human subjective perception...

@alex strange 2009-02-27 19:46:39

Note also that all of these are probably for linear 0-1 RGB, and you probably have gamma-corrected 0-255 RGB. They are not converted like you think they are.

@Anonymous 2009-10-06 08:45:15

For the first two the source is in the other answers. As for the final one - I think it was from the lectures on television or graphics...

@Nemi 2010-05-19 16:54:27

I think your last formula is incorrect. I get it returning black for a dark blue color.

@Jive Dadson 2012-11-26 03:43:48

Not correct. Before applying the linear transformation, one must first apply the inverse of the gamma function for the color space. Then after applying the linear function, the gamma function is applied.

@Jack 2013-04-04 12:41:29

Luminance only accounts for human spectral sensitivity. It does not account for the human non-linear perception of luminance, which the CIE has standardised as "Lightness" (the L* in the CIELAB color space). Depending on your application you may need to first calculate Luminance, and then Lightness ... en.wikipedia.org/wiki/Lightness

@Synchro 2014-03-16 11:46:35

The first two are linear, the last one is a bit arbitrary and does not incorporate gamma correction or sRGB color space correction.

@Mark Ransom 2014-11-05 22:25:31

@alexstrange the first formula uses linear RGB values and the second uses gamma-corrected values. The first formula is more modern, the second dates from the invention of NTSC. And the range doesn't matter since there are no non-linear operations. The third formula appears to operate on gamma-corrected values, but as it isn't a standard it's harder to be sure.

@Franci Penov 2015-01-05 03:32:35

If the color components are limited to 8 bits, the perceived luminance option 2 can be calculated as fast as the option 1 with a 256 ints lookup table of the precomputed squares. And all three can be calculated really fast with a lookup table of each color value premultiplied with each coefficient. Though that does require a lot of memory.

@Kaizer Sozay 2015-07-17 04:25:49

In the last formula, is it (0.299*R)^2 or is it 0.299*(R^2) ?

@Stephen Smith 2016-08-17 18:00:18

Here's a jsfiddle that lets you pick colors and see the luminance, in case anybody else needs to see what kinds of values you get from these formulas. (Based on the last slowest formula) jsfiddle.net/sbrexep0

@Dantevg 2017-05-07 12:49:58

@KaizerSozay As it's written here it would mean 0.299*(R^2) (because exponentiation goes before multiplication)

@Slipp D. Thompson 2017-06-09 23:39:36

@bobobobo From the Wikipedia pages for en.wikipedia.org/wiki/Relative_luminance and en.wikipedia.org/wiki/Grayscale . It's important to note that the R, G, & B multipliers are not based on any exact science; rather, they're just values that have been found to be the average luma perceived by the human eye for each channel. This was crucially important when there existed both B&W (grayscale) & color TV sets; the testing of the population was done around that time (1950s). More background at en.wikipedia.org/wiki/Color_television .

@Arthur2e5 2017-09-11 04:22:27

When it comes to W3C on a11y, the newer WCAG guide specifies the 1st formula with suggestion for linearization: w3.org/TR/WCAG20-TECHS/G17.html#G17-tests

@Max 2017-11-10 17:56:23

@MarkRansom I have verified that the last uses gamma-corrected values. FWIW to my eye the first formula gives the best results, though it's expensive if your colors aren't in linear space. The last could be a nice hack if you don't want to undo the gamma.

@Mark Ransom 2017-11-10 18:31:17

@Max formula #3 has been modified, now it's the same as #2 with an approximation of gamma to linear conversion. It's using a gamma of 2.0 instead of 2.2 to make things quicker. If you're going to use linear RGB I'd use the constants from formula #1 instead, my experience matches yours that #1 with linear is the best.

@sitesbyjoe 2010-05-26 19:34:05

I found this code (written in C#) that does an excellent job of calculating the "brightness" of a color. In this scenario, the code is trying to determine whether to put white or black text over the color.
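
The linked code is C#; as a rough JavaScript sketch of the same idea (the Rec. 601 weighting and the 128 threshold are a common heuristic, not taken from the linked code):

// Pick black or white text for a given background color (8-bit R, G, B).
function textColorFor(r, g, b) {
    const brightness = 0.299 * r + 0.587 * g + 0.114 * b; // 0-255 scale
    return brightness > 128 ? 'black' : 'white';
}

textColorFor(255, 255, 0); // 'black' on yellow
textColorFor(0, 0, 160);   // 'white' on dark blue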

@RufusVS 2016-06-21 22:21:23

That is exactly what I needed. I was doing a classic "color bars" demo, and wanted to label them on top of the color with the best black-or-white choice!

@bobobobo 2013-04-03 18:09:17

Interestingly, this formulation for RGB=>HSV just uses v=MAX3(r,g,b). In other words, you can use the maximum of (r,g,b) as the V in HSV.

I checked and on page 575 of Hearn & Baker this is how they compute "Value" as well.

From Hearn&Baker pg 319
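
In code that is a one-liner (a trivial sketch, scaling to 0-1 for 8-bit inputs):

// HSV "Value" is simply the maximum channel.
function hsvValue(r, g, b) {
    return Math.max(r, g, b) / 255;
}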

@Peter 2017-07-18 17:30:19

Just for the record the link is dead, archive version here - web.archive.org/web/20150906055359/http://…

@Myndex 2019-06-19 22:39:55

HSV is not perceptually uniform (and it isn't even close). It is used only as a "convenient" way to adjust color, but it is not relevant to perception, and the V does not relate to the true value of L or Y (CIE Luminance).

@dsignr 2012-02-24 05:27:38

This link explains everything in depth, including why those multiplier constants exist before the R, G and B values.

Edit: It also has an explanation of one of the answers here (0.299*R + 0.587*G + 0.114*B)

@Nils Pipenbrinck 2009-02-27 20:25:06

To add to what all the others said:

All these equations work reasonably well in practice, but if you need to be very precise you have to first convert the color to linear color space (apply the inverse image gamma), take the weighted average of the primary colors and, if you want to display the color, convert the luminance back into the monitor gamma.

The luminance difference between ignoring gamma and doing proper gamma is up to 20% in the dark grays.
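
As a rough JavaScript sketch of that pipeline, using the simple 2.2 power approximation of the display gamma mentioned elsewhere in this thread (the exact sRGB piecewise curve differs slightly):

// Linearize with an approximate 2.2 gamma, average with Rec. 709 weights,
// then re-encode so the grey level can be displayed.
function grayLevel(r, g, b) {
    const lin = (v) => Math.pow(v / 255, 2.2);
    const y = 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b); // linear luminance, 0-1
    return Math.round(Math.pow(y, 1 / 2.2) * 255);                 // back to gamma-encoded 0-255
}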

@Jacob 2009-02-27 19:40:56

The 'V' of HSV is probably what you're looking for. MATLAB has an rgb2hsv function and the previously cited wikipedia article is full of pseudocode. If an RGB2HSV conversion is not feasible, a less accurate model would be the grayscale version of the image.

@Gandalf 2009-02-27 19:34:54

RGB Luminance value = 0.3 R + 0.59 G + 0.11 B

http://www.scantips.com/lumin.html

I think RGB color space is perceptually non-uniform with respect to the L2 Euclidean distance. Uniform spaces include CIE LAB and LUV.

@Ian Hopkinson 2009-02-27 19:28:08

The HSV colorspace should do the trick; see the Wikipedia article. Depending on the language you're working in, you may get a library conversion.

H is hue which is a numerical value for the color (i.e. red, green...)

S is the saturation of the color, i.e. how 'intense' it is

V is the 'brightness' of the color.

@Ian Boyd 2010-05-06 14:22:18

Problem with the HSV color space is that you can have the same saturation and value, but different hues, for blue and yellow. Yellow is much brighter than blue. Same goes for HSL.

@user151496 2014-03-15 17:24:46

HSV gives you the "brightness" of a color in a technical sense. For perceptual brightness, HSV really fails.

@Myndex 2019-06-19 23:25:12

HSV and HSL are not perceptually accurate (and it's not even close). They are useful for "controls" for adjusting relative color, but not for accurate prediction of perceptual lightness. Use L* from CIELAB for perceptual lightness.

@Ben S 2009-02-27 19:22:36

Please define brightness. If you're looking for how close to white the color is you can use Euclidean Distance from (255, 255, 255)

@Myndex 2019-06-19 23:32:07

No, you can't use Euclidean distance between sRGB values; sRGB is not a perceptually uniform Cartesian/vector space. If you want to use Euclidean distance as a measure of color difference, you need to at least convert to CIELAB, or better yet, use a CAM like CIECAM02.
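
As a sketch of what that looks like: once two colors have been converted to CIELAB (for example with the rgbToXyz/xyzToLab functions in @catamphetamine's answer above), the Euclidean distance there is the classic ΔE*ab (CIE 1976) colour difference:

// Euclidean distance in CIELAB = Delta E*ab (CIE 1976 colour difference).
// Inputs are [L*, a*, b*] triples, e.g. from an sRGB -> XYZ -> Lab conversion.
function deltaE76([l1, a1, b1], [l2, a2, b2]) {
    return Math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2);
}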
