The color yellow

 from Red Blob Games
Feb 2017, updated Apr 2019

Imagine a world in which everyone but you is color blind. You notice people wearing mismatched clothes. They use the same name for different colors. It’d be a bit weird. It turns out everyone really is color blind, but since everyone else is too, nobody notices. There’s a whole lot I could say about this but in this post I want to focus on the color yellow.

This page is unfocused and unfinished. Read Section 1. Feel free to skip the rest.

 1  Yellow#

The eye has “red”, “green”, and “blue” cones, and our screens and cameras have red, green, and blue as well. That’s three primary colors of light. This gives us the classic color wheel implemented in software:

But ask any kid — there are four primary colors: red, yellow, green, blue.

Look at the distribution of the original 8 Crayola colors[1]: red, yellow, brown, orange, green, blue, violet, black. Of these, there are three colors in the red-green range, and none in the green-blue range. Artist colors[2] are unevenly distributed too. It’s not just English — across many languages, red, yellow, green, blue are the first four colors.

The commonly used colors are unevenly distributed around the R-G-B color wheel.

Why?

The short answer is that yellow is a primary color too.

There are cells in the eye that combine the raw red+green+blue signals into red+green+yellow+blue sent to the brain! This is called opponent process theory[3]. It explains why you can mix two adjacent colors (red + yellow = orange, yellow + green = lime, green + blue = cyan, blue + red = purple) but you can’t mix opposite colors (red + green = ??, blue + yellow = ??). Here’s a color wheel closer to what we actually see:

Isn’t this amazing? We’ve been taught there are three primary colors, and there are, from a certain point of view, but our brain actually sees four primary colors!

Of course, the reality is that biology is far, far messier than this. Not only are there cultural and environmental reasons for emphasizing certain colors, but the evolved biology is also messy and doesn’t quite line up with the simple four-color view of the world. The more you dig into this topic, the messier it gets. Alas.

In the rest of this page I wanted to explore an additional question: even with four primary colors, why do we still have more color names in the red-green range than in the green-blue or blue-red ranges? My hypothesis is that it’s because the red and green cones in the eye overlap in their response curves, and that overlap allows much more color sensitivity there. However, I wasn’t able to come to any good conclusions. The rest of this page is my notes and experiments. Status: abandoned.

 2  Vision#

This is the wavelength response for the three types of cones. The three curves are the cones’ spectral sensitivity curves. I plotted the data I downloaded from here[4]; it seems to match the chart on wikipedia[5]:

Any light coming in at a wavelength detected by one of the sensors will activate that receptor. For example, a red wavelength like 650nm will activate the red receptor (L) a lot, the green receptor (M) a little, and the blue receptor (S) not at all. If a receptor is only slightly sensitive to some wavelength, it will only activate weakly. A near-infrared light wave will activate the red receptor weakly, and it will look to us like dark red.
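To make that concrete, here’s a toy model in code. The cone peaks (L near 560nm, M near 530nm, S near 420nm) are roughly right, but the Gaussian shapes and widths are my own simplification, not the measured curves:

```javascript
/* Toy model of cone responses. Each cone is approximated by a
   Gaussian centered near its peak wavelength. The peaks are
   roughly right; the Gaussian shapes and widths are made up
   for illustration, not taken from the measured data. */
function gaussian(wavelength, peak, width) {
    const d = (wavelength - peak) / width;
    return Math.exp(-0.5 * d * d);
}

function toy_lms(wavelength) {
    return {
        L: gaussian(wavelength, 560, 40),
        M: gaussian(wavelength, 530, 35),
        S: gaussian(wavelength, 420, 25),
    };
}

// 650nm red light: L responds some, M a little, S essentially not at all
const red = toy_lms(650);
```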

We’re used to writing colors with RGB values, but there are many ways of writing colors (HSL, HSV, CIE, Lab, etc.). If we use the receptor values we’re using the LMS color space[6].

One oddity to notice is that there’s no way to trigger green (M) without also triggering red (L). The high overlap between green and red is what I believe will explain why we have more color names there. When the curves overlap little, such as with S, a dim light at S’s peak wavelength will look just like a bright light at a non-peak wavelength. Furthermore, wavelengths higher than the peak and lower than the peak produce very similar signals (the “principle of univariance”). When the curves overlap a lot, such as with L and M, a wavelength higher than M’s peak will produce a larger L and a smaller M, whereas a wavelength lower than M’s peak will produce a smaller L and a smaller M. We get a different combination of signals by going up or down from the peak wavelength.

The raw signals received by the brain are combined into these signals:

  1. L+M+S: this forms a bright vs dark axis
  2. S-L-M: this forms a blue vs yellow axis
  3. L-M: this forms a red vs green axis
  4. (L-M) - (M-L): “double opponent” cell that reinforces red vs green
  5. (L-M-S) - (S+M-L): “double opponent” cell that forms a red vs cyan axis
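Here’s a sketch of the first three combinations in code. The unweighted sums are placeholders — the real channels apply calibrated weights to L, M, S — but the structure is the same:

```javascript
/* Sketch of the opponent-process channels computed from cone
   signals. The equal weightings here are placeholders; the real
   channels use calibrated weights. */
function opponent_channels({L, M, S}) {
    return {
        brightness: L + M + S,    // bright vs dark
        blueYellow: S - (L + M),  // blue (+) vs yellow (-)
        redGreen:   L - M,        // red (+) vs green (-)
    };
}
```

Pure yellow light activates L and M about equally, so redGreen stays near 0 while blueYellow swings strongly negative, which is why yellow behaves like a primary even though there is no yellow cone.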

This resolves the mismatch between Helmholtz’s three primary color theory (red, green, blue, which we see in eyes, cameras, screens) and Hering’s four primary color theory (red, yellow, green, blue, which we see in common color names, board games, logos, etc.).

 3  Spectral colors#

To test the formulas I found on wikipedia, I generated a spectrum image:

For each wavelength, I transform it to L, M, S values, then convert LMS to the CIE XYZ color space, then convert XYZ to RGB. For testing purposes I also compared this to wavelength→XYZ→RGB and wavelength→RGB. When I used LMS I was unable to produce violet, but when I converted wavelength→XYZ or wavelength→RGB I got violet.

Other than violet, this image matches the reference spectrum I found elsewhere, so I proceeded with my calculations.

 4  Signal changes across wavelengths#

My hypothesis was that the changes in L,M,S values should correspond to the change in perceived color. In particular, it should change a lot in the yellow-orange region because of the overlap of the L and M curves, and very little in the blue region, where the S curve has low overlap with the others. I normalized L,M,S and then plotted how much L,M,S changed every time I changed the wavelength by 1, and compared that to the reference image on Wikipedia[8]:

As I had hoped, it is higher in the yellow-orange, but it is even higher in the cyan region (not what I expected). In the blue and red zones there’s not much change in L,M,S across wavelengths.

Here’s the change in Lab color at each wavelength. Since distances in Lab space are supposed to correspond to perceptual differences, this chart shows how much a change in wavelength affects the perceived color:

It has some similarities to the LMS chart but generally does not match. A noticeable difference is that it shows the transition from blue to violet is perceivable, whereas my LMS calculations don’t show violet at all.

My hypothesis didn’t hold up.

 5  Diagram - two frequencies#

I tried this and it didn’t seem interesting.

The colors we see are the light source’s spectrum multiplied by the object’s reflectance multiplied by the eye’s cone responses.

Most colors will activate more than one receptor. For example, orange light will activate the red and green receptors. When we see “orange” it’s because our brain interprets simultaneous red and green as orange. When we see “yellow” it’s also because of simultaneous red and green, but in a different ratio.

More than one frequency can come in at the same time. If both a red and green frequency come in at the same time, they will activate both the red and green receptors, which will make the light look “orange”. However, the light waves may contain no orange frequencies. This is what I mean when I say that we are all color blind. We can’t distinguish orange light from a mixture of red and green.

Static diagram, two dimensions: frequency on the x-axis and frequency on the y-axis. Calculate the image by multiplying. Option: adjust the positions of the green, red, and blue curves left/right to see the effects of red-green colorblindness, other kinds of colorblindness, and tetrachromacy.

In reality objects will reflect a lot more than two frequencies.

 6  Appendix: Calculations#

 6.1. XYZ to Lab conversion

https://en.wikipedia.org/wiki/Lab_color_space#Forward_transformation[9]

function xyz_to_lab(XYZ) {
    function f(t) {
        const δ = 6/29;
        // cube root above the (6/29)³ cutoff, linear approximation below
        return t > Math.pow(δ, 3) ? Math.pow(t, 1/3) : t / (3*δ*δ) + 4/29;
    }
    const {X, Y, Z} = XYZ;
    const Xn = 95.047, Yn = 100.0, Zn = 108.883;
    const L = 116 * f(Y/Yn) - 16;
    const A = 500 * (f(X/Xn) - f(Y/Yn));
    const B = 200 * (f(Y/Yn) - f(Z/Zn));
    return {L: L, A: A, B: B};
}

Color similarity in Lab space is just Euclidean distance! In CIELUV (https://en.wikipedia.org/wiki/CIELUV[10]), hue is atan2(v, u)! Distances in UVW space (https://en.wikipedia.org/wiki/CIE_1964_color_space[11]) are also meant to be perceptual.

CIELUV (u*, v*) and CIELAB (a*, b*) need to be centered at the white point.

 6.2. Hue

https://stackoverflow.com/a/11857601[12] gives an approximate wavelength-to-hue function: H = (620 - L) * 270 / 170, where L is the wavelength in nm.
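As a function (only meaningful for roughly the 450nm–620nm range that answer covers):

```javascript
/* Approximate HSL hue (0 = red, 270 = blue) from a wavelength in
   nm, per the stackoverflow answer above. Only meaningful for
   wavelengths between roughly 450nm and 620nm. */
function wavelength_to_hue(wavelength) {
    return (620 - wavelength) * 270 / 170;
}
```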

Rainbow colors (wavelength in nm): red 640, orange 590, yellow 580, green 520, blue 450, violet 420.

The atan2 of Lab colors is not the HSL hue; it’s a 4-color hue instead of a 3-color hue.

 6.3. Color distance

CIE XYZ space gave us comparisons between colors (brighter/darker, more/less saturated, redder/bluer) but not distances. CIE LUV and CIE Lab give distances too. Euclidean distance in CIE Lab color space is a reasonable approximation of perceptual difference, but the CIE later came up with better difference formulas that make Lab no longer the ideal space. https://en.wikipedia.org/wiki/Color_difference#CIE76[13]
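The CIE76 formula is just Euclidean distance between two Lab colors; here it is, using the same {L, A, B} record shape as the conversion function earlier on this page:

```javascript
/* CIE76 color difference: Euclidean distance in Lab space.
   A ΔE around 2.3 is often cited as roughly one "just
   noticeable difference". */
function delta_e_76(lab1, lab2) {
    const dL = lab1.L - lab2.L;
    const dA = lab1.A - lab2.A;
    const dB = lab1.B - lab2.B;
    return Math.sqrt(dL*dL + dA*dA + dB*dB);
}
```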

Lab is good enough for me, I think.

 6.4. Spectrum data sources

I compared three spectra:

  1. Reference spectrum from stackoverflow[14] (wavelength to RGB)
  2. Wavelength→LMS→XYZ→RGB, using the Von Kries matrices on wikipedia
  3. Wavelength→LMS→XYZ→RGB, using the CIECAM97 matrices on wikipedia

The differences were minor and it didn’t matter for my purposes.

 6.5. LMS to XYZ conversion

The wikipedia:LMS[15] page only has the mapping from XYZ to LMS:

/** {X,Y,Z} to {L,M,S} conversion */
function xyz_to_lms(XYZ) {
    const {X, Y, Z} = XYZ;
    const L =  0.7328*X + 0.4296*Y - 0.1624*Z;
    const M = -0.7036*X + 1.6975*Y + 0.0061*Z;
    const S =  0.0030*X + 0.0136*Y + 0.9834*Z;
    return {L: L, M: M, S: S};
}

I can ask Emacs calc to invert that matrix:

inv([[0.7328, 0.4296, -0.1624] [-0.7036, 1.6975, 0.0061] [0.0030, 0.0136, 0.9834]])
[[1.09612382084, -0.278869000218, 0.182745179383], [0.454369041975, 0.473533154307, 0.0720978037172], [-9.62760873843e-3, -5.69803121611e-3, 1.01532563995]]

That gives me this function:

/** {L,M,S} to {X,Y,Z} conversion */
function lms_to_xyz(LMS) {
    const {L, M, S} = LMS;
    const X = 1.09612382084*L - 0.278869000218*M + 0.182745179383*S;
    const Y = 0.454369041975*L + 0.473533154307*M + 0.0720978037172*S;
    const Z = -9.62760873843e-3*L - 5.69803121611e-3*M + 1.01532563995*S;
    return {X: X, Y: Y, Z: Z};
}
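To skip the round trip through Emacs, the inversion can also be done directly in JavaScript. This is a plain adjugate-over-determinant sketch with no pivoting or singularity handling, which is fine for these well-conditioned 3×3 color matrices:

```javascript
/* Invert a 3x3 matrix: adjugate divided by determinant.
   No pivoting or error handling, so only suitable for
   well-conditioned matrices like the ones on this page. */
function invert3(m) {
    const [[a, b, c], [d, e, f], [g, h, i]] = m;
    const det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g);
    return [
        [(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
        [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
        [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det],
    ];
}
```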

Von Kries 1:

inv([[0.38971, 0.68898, -0.07868] [-0.22981, 1.18340, 0.04641] [0, 0, 1]])
[[1.91019683405, -1.11212389279, 0.201907956768], [0.370950088249, 0.629054257393, -8.05514218436e-6], [0., 0., 1.]]

Von Kries 2:

inv([[0.4002, 0.7076, -0.0808] [-0.2263, 1.1653, 0.0457] [0, 0, 0.9182]])
[[1.86006661251, -1.1294800781, 0.219898303049], [0.361222924921, 0.638804306467, -7.12750153053e-6], [0., 0., 1.08908734481]]

first CIECAM97:

inv([[0.8951, 0.2664, -0.1614] [-0.7502, 1.7135, 0.0367] [0.0389, -0.0685, 1.0296]])
[[0.986992905467, -0.147054256421, 0.159962651664], [0.432305269723, 0.518360271537, 0.0492912282129], [-8.52866457518e-3, 0.0400428216541, 0.968486695788]]

second CIECAM97:

inv([[0.8562, 0.3372, -0.1934] [-0.8360, 1.8327, 0.0033] [0.357, -0.0469, 1.0112]])
[[0.930752207661, -0.166680465649, 0.178557676521], [0.425125853562, 0.469465316214, 0.0797766065421], [-0.308880672076, 0.080619906613, 0.929585079439]]

A helper that builds a conversion function from any of these matrices:

function lms_to_xyz_matrix(T) {
    return function(LMS) {
        const {L, M, S} = LMS;
        return {X: T[0][0]*L + T[0][1]*M + T[0][2]*S,
                Y: T[1][0]*L + T[1][1]*M + T[1][2]*S,
                Z: T[2][0]*L + T[2][1]*M + T[2][2]*S};
    }
}

let lms_to_xyz_vonkries1 = lms_to_xyz_matrix([[1.91019683405, -1.11212389279, 0.201907956768], [0.370950088249, 0.629054257393, -8.05514218436e-6], [0., 0., 1.]]);
let lms_to_xyz_vonkries2 = lms_to_xyz_matrix([[1.86006661251, -1.1294800781, 0.219898303049], [0.361222924921, 0.638804306467, -7.12750153053e-6], [0., 0., 1.08908734481]]);
let lms_to_xyz_ciecam97a = lms_to_xyz_matrix([[0.986992905467, -0.147054256421, 0.159962651664], [0.432305269723, 0.518360271537, 0.0492912282129], [-8.52866457518e-3, 0.0400428216541, 0.968486695788]]);
let lms_to_xyz_ciecam97b = lms_to_xyz_matrix([[0.930752207661, -0.166680465649, 0.178557676521], [0.425125853562, 0.469465316214, 0.0797766065421], [-0.308880672076, 0.080619906613, 0.929585079439]]);
let lms_to_xyz_ciecam02 = lms_to_xyz;

 6.6. Frequency to XYZ conversion

From this data[16] (x, y, z values per wavelength, the “CIE color matching functions”).

When I calculate the spectrum from wavelength→LMS→XYZ→RGB I am missing violet, but this wavelength→XYZ→RGB approach gives me violet.

 6.7. XYZ to RGB conversion

From wikipedia:CIEXYZ[17]:

/** {X,Y,Z} to {R,G,B} conversion from https://en.wikipedia.org/wiki/CIE_1931_color_space */
function xyz_to_rgb(XYZ) {
    const {X, Y, Z} = XYZ;
    const R = 0.41847*X - 0.15866*Y - 0.082835*Z;
    const G = -0.091169*X + 0.25243*Y + 0.015708*Z;
    const B = 0.00092090*X - 0.0025498*Y + 0.17860*Z;
    return {R: R, G: G, B: B};
}

“Along the same lines, the relative magnitudes of the X, Y, and Z curves are arbitrary.” – so I need to apply my own multiplier for each, which is what these sliders are for:

(interactive sliders for the S, M, L weights)

I ended up setting weights of S=1, M=3, L=2. These made the cyan and yellow bands match the reference image I got from stackoverflow.

 6.8. TODO Plot XYZ in 2d space

This should tell me whether the frequency-to-XYZ conversion matches the CIE 1931 diagram.

x = X / (X+Y+Z)
y = Y / (X+Y+Z)
z = Z / (X+Y+Z)
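In code, with the same normalization (z is redundant because x + y + z = 1):

```javascript
/* Project an XYZ color onto the 2d chromaticity diagram: x and y
   are the proportions of X and Y. The z coordinate is redundant
   because x + y + z = 1. */
function xyz_to_xy({X, Y, Z}) {
    const sum = X + Y + Z;
    return {x: X / sum, y: Y / sum};
}
```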

 6.9. TODO Luminous efficiency curve

Do I need to use this somehow? http://cvrl.ioo.ucl.ac.uk/cvrlfunctions.htm[18]

Yes, I think so — https://en.wikipedia.org/wiki/CIE_1931_color_space#Color_matching_functions[19] says I need to multiply the values by the spectral radiance.

 7  Appendix: Data#

 8  Appendix: Cross-browser issues#

I ran into two issues on this page. I’m using this structure to put axes on the bitmap data:

<svg>
  <g> ... axes ... </g>
  <image href="data:image/png;base64,iVBORw0KG…"/>
</svg>
  1. In Firefox, I needed to include the size of the image. <image width="…" height="…" href="…"/>
  2. In Safari, I needed to use a namespace. <image xlink:href="…"/>. This may be related to SVG1 vs SVG2[23] differences.
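Combining both workarounds gives markup like this (the width/height numbers here are placeholders, and the data URI is truncated):

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <g> <!-- axes --> </g>
  <!-- width/height for Firefox; xlink:href (with the namespace
       declared on the <svg>) for Safari -->
  <image width="600" height="200"
         xlink:href="data:image/png;base64,iVBORw0KG…"/>
</svg>
```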

 9  Appendix: more reading#

  1. The animal kingdom has four types of cones, roughly detecting infrared, brown, blue, and ultraviolet; see the diagram on the bird vision page of wikipedia[24]. Mammals have the brown and blue cones. Primates have the brown split into green and red (around 30-40 million years ago); see wikipedia[25]. The L and M cone genes are similar and sometimes merge during meiosis, producing red-green color-blindness. Women experience red-green color blindness far less than men because the L and M genes are on the X chromosome, and if they merge on one X chromosome, women have a second X chromosome where they may not have merged. Human “tetrachromats” have an additional brown cone, similar to M and L. It should increase color sensitivity a little bit but my guess is that it’s nowhere near as useful as a second blue cone would’ve been (turns out: most tetrachromats don’t see extra colors but some do[26])
  2. The cone cells have both spatial and temporal effects. The temporal effects can be experienced when you stare at a color picture for 30 seconds and then see an afterimage[27] when you look away.
  3. The red cone can pick up some infrared, and if you block out all other wavelengths you can see near infrared[28]. Another trick with blocking out wavelengths is to block out the wavelengths where L and M cones overlap. Watch Ethan see purple for the first time[29]. This suggests that many red-green colorblind people actually have both L and M cones, but they’re overlapping too much for the brain to pick up the L-M signal. The glasses essentially “increase contrast” for the L-M red-green signal to the point where many red-green colorblind people can distinguish red from green!
  4. All three cones can pick up a little bit of ultraviolet, but most of the ultraviolet is blocked by the lens of the eye. The S cone picks up ultraviolet slightly more than L or M. As a result, ultraviolet-reflecting objects (such as t-shirts or shoelaces) can look like a bluish white.
  5. The color you see is the sum over all wavelengths of { the product of the light source’s wavelength multiplied by the object’s color at that wavelength }. As a result, the same object can look different colors under different lights. The brain tries hard to autocorrect for this but cameras have to be told which “white balance” to use. My red car looks gray under single-frequency yellow streetlights.
  6. The CIE XYZ diagram shows how X and Y are related, and how many combinations of X,Y,Z can’t even be seen by humans. The magentas don’t correspond to any wavelength of light.

 10  Appendix: References#
