So how do these ideas combine?

Let's look at what happens when we illuminate two different materials with the same light. The fifth illustration lets us explore that question: you can sketch the incoming light and the reflectances of the two surfaces, and see the product, which is the light that returns from each surface and enters the eye. At the bottom are the three "total responses" to the light reflected from the two surfaces. As an exercise, draw an incoming light distribution, and then build two different reflectance functions that generate the same "results" at the bottom. Make the reflectance functions look as different as possible while still generating the same results. Then vary the incoming light and see how the results differ. This shows that two materials that look alike under one light may look very different under another light.
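
If you'd like to see the same phenomenon numerically, here is a minimal sketch in Java (it is not the applets' source; the class name, the bell-shaped stand-ins for the filter functions, and both lights are invented for illustration). It builds a second reflectance by adding to the first a perturbation that the first light cannot detect, a so-called "metameric black", so the two curves match under that light by construction and will generally differ under the other light:

    // A sketch, not the applets' source: build two reflectance curves that give the same
    // three responses under one light but different responses under another. The bell-shaped
    // filter functions, both lights, and the sine "wiggle" are all invented for illustration.
    public class Metamers {
        static final int N = 31;                                   // 400 nm .. 700 nm in 10 nm steps
        static double lam(int i) { return 400 + 10 * i; }
        static double bell(double l, double center, double width) {
            double t = (l - center) / width;
            return Math.exp(-t * t);
        }
        static double dot(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < N; i++) s += a[i] * b[i];
            return s;
        }
        // Response of reflectance rho under light I through filter f: sum of I * rho * f.
        static double resp(double[] I, double[] rho, double[] f) {
            double s = 0;
            for (int i = 0; i < N; i++) s += I[i] * rho[i] * f[i];
            return s;
        }
        public static void main(String[] args) {
            double[] fR = new double[N], fG = new double[N], fB = new double[N];
            double[] lightA = new double[N], lightB = new double[N];
            double[] rho1 = new double[N], wiggle = new double[N];
            for (int i = 0; i < N; i++) {
                double l = lam(i);
                fR[i] = bell(l, 600, 50); fG[i] = bell(l, 550, 50); fB[i] = bell(l, 450, 50);
                lightA[i] = 1.0;                                   // flat "white" light
                lightB[i] = 0.2 + bell(l, 450, 30);                // bluish light
                rho1[i] = 0.5;                                     // first reflectance: neutral gray
                wiggle[i] = Math.sin((l - 400) / 300.0 * 4 * Math.PI);  // arbitrary bumpy curve
            }
            // Under lightA the response to rho through filter k is dot(rho, w[k]), where
            // w[k] = lightA * f[k]. Make the wiggle orthogonal to all three w[k] (a
            // "metameric black"), so adding it to rho1 cannot change any response under lightA.
            double[][] w = { new double[N], new double[N], new double[N] };
            double[][] f = { fR, fG, fB };
            for (int k = 0; k < 3; k++)
                for (int i = 0; i < N; i++) w[k][i] = lightA[i] * f[k][i];
            for (int k = 0; k < 3; k++) {                          // Gram-Schmidt on w[0..2]
                for (int j = 0; j < k; j++) {
                    double c = dot(w[k], w[j]);
                    for (int i = 0; i < N; i++) w[k][i] -= c * w[j][i];
                }
                double n = Math.sqrt(dot(w[k], w[k]));
                for (int i = 0; i < N; i++) w[k][i] /= n;
            }
            for (int k = 0; k < 3; k++) {
                double c = dot(wiggle, w[k]);
                for (int i = 0; i < N; i++) wiggle[i] -= c * w[k][i];
            }
            double[] rho2 = new double[N];                         // rho1 plus a scaled metameric black
            for (int i = 0; i < N; i++) rho2[i] = rho1[i] + 0.3 * wiggle[i];
            System.out.printf("light A: rho1 (%.3f %.3f %.3f)  rho2 (%.3f %.3f %.3f)%n",
                resp(lightA, rho1, fR), resp(lightA, rho1, fG), resp(lightA, rho1, fB),
                resp(lightA, rho2, fR), resp(lightA, rho2, fG), resp(lightA, rho2, fB));
            System.out.printf("light B: rho1 (%.3f %.3f %.3f)  rho2 (%.3f %.3f %.3f)%n",
                resp(lightB, rho1, fR), resp(lightB, rho1, fG), resp(lightB, rho1, fB),
                resp(lightB, rho2, fR), resp(lightB, rho2, fG), resp(lightB, rho2, fB));
        }
    }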

What about using RGB colors in computer graphics?

Folks often think that because there are three types of color-sensing cells in the eye, we should be able to describe everything about color with just three numbers, and therefore assign each color a "red", "green", and "blue" intensity. There's an assumption that if you say how much red, how much green, and how much blue there is in a colored light, you've told the whole story. In the same way, folks think that you can describe the color of an object by saying how much red, green, and blue it reflects when white light illuminates it, and that this tells the whole story about the material. The final step in this logic is to say that if you know the RGB components of the illuminating light and the RGB color of the surface, you can compute the RGB color of the reflected light by multiplying corresponding numbers; thus a material whose "RGB color" is (0.7, 0.5, 0.5), illuminated by a light whose "RGB color" is (1.0, 0.5, 1.0), is supposed to reflect light whose color is (0.7, 0.25, 0.5). From what we've seen above, we know that the spectrum of the reflected light, E(λ), can be written in terms of the spectrum of the incoming light, I(λ), and the reflectance function of the material, ρ(λ):

    E(λ) = I(λ) ρ(λ)

The resulting red, green, and blue perceptual values are obtained by integrating this distribution against the red, green, and blue filter functions for the three types of cells, which we can call f_R(λ), f_G(λ), and f_B(λ):

    Red   = ∫ E(λ) f_R(λ) dλ
    Green = ∫ E(λ) f_G(λ) dλ
    Blue  = ∫ E(λ) f_B(λ) dλ

The "RGB" school of thought would say that the Red, Green, and Blue values should instead be computed by taking the product of the red component of the light with the red component of the surface:

    Red = ( ∫ I(λ) f_R(λ) dλ ) · ( ∫ ρ(λ) f_R(λ) dλ )

and similarly for green and blue. It should be fairly clear that unless the filter functions f_R, f_G, and f_B are very simple in form, these two values will not agree. The sixth illustration lets the reader experiment with this explicitly. The reader gets to sketch the incoming light spectrum and the reflectance of a material. To the right of these, the "RGB" values for each are computed. Beneath them, the product spectrum is computed, and to its left are the RGB values associated with this product spectrum (i.e., what we see). To the right is the product of the incoming and reflectance RGB values. For most "reasonable-looking" spectra and reflectances, the two are quite close, but it's also easy to create cases where they are very different. Try illuminating with a light that's made up of mostly high-frequency greens, and make the material have high reflectance only in the region of low-frequency greens (the yellow zone). Compare the product of the RGB values to the RGB values for the product.
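
If you'd rather try the numbers directly, here is a small Java sketch of that last experiment, again with invented bell-shaped filter functions. The light is concentrated in the high-frequency greens and the reflectance in the yellow zone, so the two spectra barely overlap; the RGB values of the product spectrum then come out far smaller than the product of the two sets of RGB values:

    // A sketch with invented bell-shaped filter functions: compare the RGB values of the
    // product spectrum (what we see) with the componentwise product of the two RGB triples.
    public class RgbShortcut {
        static final int N = 31;                                   // 400 nm .. 700 nm in 10 nm steps
        static double bell(double l, double center, double width) {
            double t = (l - center) / width;
            return Math.exp(-t * t);
        }
        static double integrate(double[] a, double[] b) {          // discrete stand-in for the integral
            double s = 0;
            for (int i = 0; i < N; i++) s += a[i] * b[i];
            return s;
        }
        public static void main(String[] args) {
            double[] fR = new double[N], fG = new double[N], fB = new double[N];
            double[] light = new double[N], rho = new double[N], product = new double[N];
            for (int i = 0; i < N; i++) {
                double l = 400 + 10 * i;
                fR[i] = bell(l, 600, 50); fG[i] = bell(l, 550, 50); fB[i] = bell(l, 450, 50);
                light[i] = bell(l, 510, 15);                       // mostly high-frequency greens
                rho[i] = bell(l, 570, 15);                         // reflects only the yellow zone
                product[i] = light[i] * rho[i];                    // the light that reaches the eye
            }
            // "What we see": integrate the product spectrum against each filter.
            System.out.printf("RGB of product:  (%.4f, %.4f, %.4f)%n",
                integrate(product, fR), integrate(product, fG), integrate(product, fB));
            // The shortcut: multiply the light's RGB by the surface's RGB, channel by channel.
            System.out.printf("product of RGBs: (%.4f, %.4f, %.4f)%n",
                integrate(light, fR) * integrate(rho, fR),
                integrate(light, fG) * integrate(rho, fG),
                integrate(light, fB) * integrate(rho, fB));
        }
    }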

That's not all

Evidently, taking just three numbers to represent the infinite-dimensional thing that is the spectrum of incoming light (or the reflectance) is just plain inadequate, at least theoretically. In fact, the issue of things looking different under different lights is significant, and there are some who feel that paintings should be viewed under the kind of light in which they were created, so that impressionist paintings, for example, should only be viewed under sunlight. But for computer graphics, it's evidently impractical to store the light intensity at every possible frequency, so we have to compromise. The usual compromise is to use three values, but there's a school that says that taking five spectral values works well enough for all practical purposes. There's lots more to know about this, and Roy Hall's book (Illumination and Color in Computer Generated Imagery) is a great place to find out about it.

There's another question you can study that has a surprising answer. Suppose you took all possible mono-spectral lights, and computed the three "responses" associated with them. You could call these three responses r, g, and b. If for each wavelength you plotted the point (r, g, b) in 3-space, you'd get an arc of points corresponding to wavelengths between 700 nm and 400 nm (which are the longest and shortest wavelengths we can see, more or less). This arc, if you actually drew it, would turn out to have the shape of a horseshoe. (If you turned the lights down a bit, you would get a smaller horseshoe -- closer to the origin -- and if you turned them up, you'd get a larger one. I'm going to assume that all lights in the rest of this discussion are adjusted so that the total stimulus -- r + g + b -- is always the same.)
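
Here is a rough Java sketch of that computation. For a monochromatic light, each response is just the value of the corresponding filter function at that wavelength, and dividing by r + g + b removes the brightness, leaving a point we can describe by two coordinates. The bell-shaped filter functions below are invented, so the arc they produce is only qualitatively like the real one; the actual horseshoe requires the measured response curves:

    // A sketch: for each monochromatic light, compute the three responses, normalize so that
    // r + g + b = 1, and print the resulting two-dimensional point. The bell-shaped filter
    // functions are invented, so this traces only a rough stand-in for the real horseshoe.
    public class SpectralLocus {
        static double bell(double l, double center, double width) {
            double t = (l - center) / width;
            return Math.exp(-t * t);
        }
        public static void main(String[] args) {
            for (double l = 400; l <= 700; l += 10) {
                // A monochromatic light is a spike at l, so each response is just the
                // filter value there.
                double r = bell(l, 600, 50), g = bell(l, 550, 50), b = bell(l, 450, 50);
                double total = r + g + b;                          // divide out the brightness
                System.out.printf("%4.0f nm -> (%.3f, %.3f)%n", l, r / total, g / total);
            }
        }
    }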

The horseshoe shape in itself is not surprising. But suppose now that you have a color monitor that has three "color guns" that can turn on a certain amount of red, green, or blue at each pixel. Suppose that the "red" that the first gun turns on is nearly an ideal red -- it's right at the 700 nm end of the horseshoe. And suppose that the blue is right at the 400 nm end of the horseshoe, and that the green is right at the bend of the horseshoe. Then any color you can create by turning on a combination of these will lie somewhere in the triangle whose vertices are those three points. It doesn't take much to see that some pure spectral colors cannot be reproduced by your monitor, even though it's nearly an ideal monitor. In fact, most monitors have a red that's not really at one end of the horseshoe, and a blue that's not really at the other, and a green that's not quite at the edge of the horseshoe, so the "gamut" of screen colors is a relatively small triangle within the larger "filled-in-horseshoe-shape" gamut of all possible colors.
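
Deciding whether a given chromaticity is displayable then reduces to a point-in-triangle test in that normalized plane. In the Java sketch below, the coordinates of the three guns are invented (though roughly where typical primaries land); with them, a point near the green bend of the horseshoe tests as outside the triangle, while a desaturated color near the middle tests as inside:

    // A sketch: decide whether a chromaticity can be reproduced by three primaries, i.e.
    // whether it lies inside the triangle of their chromaticities. The gun coordinates
    // below are invented for illustration, not measurements of any particular monitor.
    public class GamutTest {
        // Twice the signed area of the triangle a, b, c; its sign says on which side of
        // the line a->b the point c lies.
        static double cross(double ax, double ay, double bx, double by, double cx, double cy) {
            return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        }
        static boolean insideTriangle(double px, double py, double ax, double ay,
                                      double bx, double by, double cx, double cy) {
            double d1 = cross(ax, ay, bx, by, px, py);
            double d2 = cross(bx, by, cx, cy, px, py);
            double d3 = cross(cx, cy, ax, ay, px, py);
            boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
            boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
            return !(hasNeg && hasPos);                            // all on one side => inside (or on an edge)
        }
        public static void main(String[] args) {
            double rx = 0.64, ry = 0.33;                           // "red" gun   (invented coordinates)
            double gx = 0.30, gy = 0.60;                           // "green" gun
            double bx = 0.15, by = 0.06;                           // "blue" gun
            double[][] tests = {
                { 0.10, 0.80 },                                    // near the green bend of the horseshoe
                { 0.35, 0.40 },                                    // a desaturated color near the middle
            };
            for (double[] t : tests) {
                System.out.printf("(%.2f, %.2f) reproducible? %b%n", t[0], t[1],
                    insideTriangle(t[0], t[1], rx, ry, gx, gy, bx, by));
            }
        }
    }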

Worse still, your printer has the same problem -- it's got various inks that can reflect white light in various ways, but once again, combinations of those inks lead to colors inside some convex polygon inside the horseshoe, and it's probably not the same polygon as your monitor. So if you craft an ideal image on your monitor, it may be ugly when you print it! Those are the grim facts of life about color in the computer graphics world, and they've all been based on the generous assumption that everything in sight is linear, which turns out to be overoptimistic, alas. Illustrations showing the horseshoe shape are under construction, and may appear here at some time in the future...


Questions or feedback on these illustrations should be sent to John F. Hughes.
Questions on the Java source should be sent to Adam Doppelt.