
Basic Greyscale Images

article by Ben Lincoln


Greyscale (often referred to as "black and white") images are the basic building block of all imaging systems, from the human eye to the most exotic gamma ray cameras, radio telescopes, and 3D sonar devices. Each pixel is one-dimensional - it represents a position along a single scale. That scale may be a simple linear representation of how much red light is reflected by an object, or a complex mathematical relationship describing how that single point responds to the spectral bands that many different sensors capture, but by its nature it can only convey one particular piece of information at a time to whoever is viewing it.

The Image Cube

In multispectral and hyperspectral imaging systems, the set of data collected by the sensors is commonly known as the "image cube". The image cube is a three-dimensional "stack" of two-dimensional greyscale images[1].
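
For readers who think in code, this structure maps naturally onto a three-dimensional array. Below is a minimal sketch in Python with NumPy; the band names and image dimensions are invented for illustration.

    import numpy as np

    # A hypothetical five-plane cube: five 2D greyscale images, each
    # 1024 pixels wide by 768 tall, stacked along a new "depth" axis.
    band_names = ["blue", "green", "red", "near_ir", "thermal"]
    cube = np.zeros((len(band_names), 768, 1024), dtype=np.float32)

    # Each slice of the cube is an ordinary greyscale image...
    green_plane = cube[band_names.index("green")]   # shape: (768, 1024)

    # ...and each (row, column) position holds one value per band.
    pixel_spectrum = cube[:, 384, 512]              # shape: (5,)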

[ An Example Image Cube ]

A simple five-plane image cube, of the type I create for my photography.


The number of "slices" in the cube is limited only by the technology of the device(s) which generate it. For example, JPL's OnEarth website makes available satellite data that can be used to create cubes of between eight and eleven bands, depending on how they're being counted.
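
A cube can be assembled from whatever individual planes a device (or website) provides. Here is a sketch of one way to do this with Python, NumPy, and Pillow; the filenames are hypothetical, and the only real requirement is that every plane has the same pixel dimensions.

    import numpy as np
    from PIL import Image

    # Hypothetical filenames for eight individually downloaded band
    # images of the same scene, all with identical pixel dimensions.
    band_files = ["nyc_blue.png", "nyc_green.png", "nyc_red.png",
                  "nyc_ir1.png", "nyc_ir2.png", "nyc_ir3.png",
                  "nyc_thermal.png", "nyc_srtm.png"]

    # Load each file as a greyscale ("L" mode) plane, then stack the
    # planes along a new first axis to form the cube.
    planes = [np.asarray(Image.open(name).convert("L"), dtype=np.float32)
              for name in band_files]
    cube = np.stack(planes, axis=0)   # shape: (8, height, width)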

Basic Greyscale

The most basic use of greyscale in imaging is to represent how much of a given type of electromagnetic radiation[2] was received at each position by the sensor. Usually, this is based on how reflective the object being imaged is in that part of the electromagnetic spectrum. For example, a red hot-air balloon photographed with a digital camera appears red in the photograph because its surface reflects red light from the sun into the parts of the camera's sensor which are sensitive to the brightness of red light. However, this is not always the case. X-ray images are created by "backlighting" the subject with a source of X-rays, so the result is a representation of how transparent or opaque the subject is to X-rays, not how reflective of them it is. Thermal imagers capture the far-infrared radiation produced by the subject matter itself, instead of reflected energy.
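
However the values are produced - reflected, transmitted, or emitted - displaying them as greyscale ultimately means mapping the measured range onto the display range. A minimal sketch of that linear mapping, assuming raw sensor values in an arbitrary unit:

    import numpy as np

    def to_greyscale(raw, low=None, high=None):
        """Linearly map raw sensor values onto the 0-255 display range.

        low/high default to the data's own extremes; fixed values can
        be passed instead to keep several bands on a common scale.
        """
        raw = np.asarray(raw, dtype=np.float64)
        low = raw.min() if low is None else low
        high = raw.max() if high is None else high
        if high <= low:                      # flat data: avoid dividing by zero
            return np.zeros(raw.shape, dtype=np.uint8)
        scaled = (raw - low) / (high - low)  # now 0.0 .. 1.0
        return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)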

Some devices respond to other aspects of the electromagnetic radiation they collect, either instead of, or in addition to, its intensity. Human eyes can't perceive phase or polarization[3], but both of these values can be extremely useful. If you have a digital camera, it most likely uses the phase of light for its autofocus function. Active sensors (such as radar and sonar) can measure not only the reflectivity of the subject matter, but also its distance and relative velocity. All of these things can be represented (individually) as a position on a scale, and therefore as a greyscale image[4].

Moving far beyond the familiar red, green, and blue intensities can reveal surprising aspects of a familiar scene. Here is a set of greyscale images from the aforementioned OnEarth website, which depict New York City in eight spectral bands (blue, green, red, near-infrared, two mid-infrared bands, thermal infrared, and C/X-band microwave radar from the Shuttle Radar Topography Mission), as well as the elevation, which the SRTM system captured in addition to the raw reflectivity.

OnEarth Satellite Imagery of New York City
[ Nine greyscale panels: Blue, Green, Red, IR1, IR2, IR3, Thermal, SRTM, Elevation ]

Note that in the SRTM (microwave radar) image, there are several curious gaps in Manhattan Island. I initially assumed these were artifacts of the radar data processing (and some of them may very well be). However, the southern tip of the island, at least, was captured accurately: examine very old maps of the island, and you will see that this area is artificial fill added over the last 300-400 years. In addition, notice that "soft" areas like Central Park are less reflective of C/X-band radar than "hard" areas like the downtown core.

Moving Beyond The Basics

Populating the image cube with this basic data lays the foundation for derived greyscale images (which are covered in the next two articles - Calculated Greyscale Images and Statistical Greyscale Images), as well as false colour images of several types (discussed in the remainder of the articles in this section).

Footnotes
1. Users of business intelligence and database software may already be familiar with the use of the term "cube" for a three-dimensional representation of data. The usage here is the same, except that in the context of imaging, each point inside the cube represents a single pixel's value, from low to high, and the Z-axis (depth) typically represents a difference in spectral band or sensor type instead of movement through time.
2. Or sound, in the case of sonar, et cetera.
3. Our eyes are not directly sensitive to the polarization of light, but due to a quirk in their physical structure, it is possible for most people to learn to indirectly perceive polarization to some degree - see Haidinger's Brush.
4. Representing a value such as phase, hue, or certain types of polarization as a position on a scale from low to high can give confusing or unintuitive results, because these values are circular. That is, a phase of 359 degrees is 358 degrees "higher" than a phase of 1 degree, but the two are also only 2 degrees apart, because the scale wraps back around to zero. Representing them in greyscale can mask relationships like this, vaguely analogous to the way a two-dimensional map of the world distorts the relative sizes of continents at different latitudes. If two sets of this type of data are compared, this limitation can be overcome by representing the relative difference between them instead of an absolute value. This type of comparison is discussed in the next article (Calculated Greyscale Images).
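
As a concrete illustration of that wrap-around arithmetic, here is a minimal sketch (Python/NumPy, phases assumed to be in degrees) of the relative-difference approach:

    import numpy as np

    def phase_difference(a, b):
        """Smallest signed difference between two phase images, in degrees.

        Naive subtraction reports phases of 359 and 1 as 358 degrees
        apart; shifting the result into the -180..+180 range respects
        the circular scale and reports them as only 2 degrees apart
        (with the sign indicating direction).
        """
        return (np.asarray(a, dtype=np.float64)
                - np.asarray(b, dtype=np.float64) + 180.0) % 360.0 - 180.0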