Signal and image processing pose many difficult but important problems for practitioners and users because of the issues that arise in moving from the continuous domain of the real world, where there is an "infinite" amount of information, to the discrete domain of the computer, where only a finite amount of data can be stored.

When moving from the continuous realm to the discrete realm of the computer, computers not only sample at a finite number of spatial locations, as demonstrated in the Introduction applet, but also "quantize" values, reducing them to a (small) finite set. Here, we demonstrate quantization by having students draw a "continuous" function and then quantize it, showing the quantized version as well as a graph of the error introduced by the quantization process.
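The quantization step the applet performs can be sketched in a few lines. This is a minimal illustration, not the applet's actual code: it assumes a uniform quantizer that snaps each sample to the nearest of a small set of evenly spaced levels, and it measures the error as the difference between input and output.

```python
import numpy as np

def quantize(signal, levels, lo=-1.0, hi=1.0):
    """Snap each sample to the nearest of `levels` evenly spaced values in [lo, hi]."""
    q = np.linspace(lo, hi, levels)
    # for every sample, find the index of the closest quantization level
    idx = np.argmin(np.abs(signal[:, None] - q[None, :]), axis=1)
    return q[idx]

# a "continuous" input signal, sampled densely so quantization dominates
t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * t)

x_q = quantize(x, levels=4)   # quantized version shown by the applet
err = x - x_q                 # the error graph the applet plots
```

With 4 levels spanning [-1, 1], adjacent levels are 2/3 apart, so the error of a nearest-level quantizer never exceeds half that spacing, 1/3.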


Anyone interested in computer graphics and in the changes continuous signals undergo as they are "brought into" the digital world.


To get a sense of the effects of quantization error, try drawing an input signal and setting the number of levels to which it is quantized. Observe the resulting signal and the difference between the input and output signals. Try drawing a signal whose quantized version is constant even though the original varies. Also, try drawing a signal that varies only a little but whose quantized version varies significantly, jumping between two quantization levels.
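Both suggested experiments can be reproduced numerically. The sketch below (using the same hypothetical uniform 4-level quantizer over [-1, 1] as above, not the applet's code) constructs one signal that varies yet quantizes to a constant, because it stays inside a single level's decision region, and one small-amplitude signal that straddles the boundary between two levels and so produces large quantized swings.

```python
import numpy as np

def quantize(signal, levels, lo=-1.0, hi=1.0):
    """Snap each sample to the nearest of `levels` evenly spaced values in [lo, hi]."""
    q = np.linspace(lo, hi, levels)
    idx = np.argmin(np.abs(signal[:, None] - q[None, :]), axis=1)
    return q[idx]

t = np.linspace(0.0, 1.0, 200)

# Experiment 1: the signal varies, but every sample is closest to the
# same level (1/3), so the quantized output is constant.
flat_in = 0.5 + 0.1 * np.sin(2 * np.pi * t)
flat_out = quantize(flat_in, levels=4)

# Experiment 2: a tiny oscillation around 0 straddles the boundary
# between the levels -1/3 and 1/3, so the output jumps between them.
jumpy_in = 0.05 * np.sin(2 * np.pi * t)
jumpy_out = quantize(jumpy_in, levels=4)
```

The second output swings by 2/3 even though the input only varies by 0.1, which is exactly the "small variation, large quantized change" effect the exercise asks students to find.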