[ Beneath the Waves ]


article by Ben Lincoln


This article is the first in a series covering the equipment I use for multispectral photography, as well as other available options. Where the theory allows, I will attempt to provide a high-level overview, including alternative designs beyond the ones I actually use.

Broad-Band Camera with Filters in Front of the Lens

This is the most straightforward approach (especially when repurposing conventional photographic equipment). A camera (including lens) is selected in which the sensor responds more or less equally across the entire spectral range to be imaged. The light entering the camera is controlled via bandpass filters; for example, the near infrared image is produced by placing a filter in front of the lens which allows near infrared light to pass, but blocks shorter wavelengths. Multiple exposures are required in order to create a colour (or false colour) image, because each exposure only allows one band (or "channel") through.

Multispectral Imaging Using Bandpass Filters
[   ]

The most common method of multispectral imaging. The camera is (more or less) equally sensitive to all frequencies across a band of the spectrum. The light entering the camera is bandpass-filtered to select the band to be imaged. Multiple exposures (with different filters) are required to create colour images.
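The compositing step described above is simple enough to sketch in a few lines. The following is a minimal illustration (not any camera vendor's actual pipeline): three greyscale exposures, shot through different bandpass filters, are stacked into one false-colour image. The band names and the NIR→red, visible→green, UVA→blue channel mapping are illustrative choices on my part.

```python
# Combine three bandpass-filtered greyscale exposures into one
# false-colour image. Each exposure is a 2-D list of 8-bit values;
# all three must share the same dimensions, since each was shot
# from the same position with a different filter.

def false_colour(nir, vis, uva):
    """Merge three greyscale exposures into one image of (R, G, B) tuples."""
    if not (len(nir) == len(vis) == len(uva)):
        raise ValueError("exposures must have identical dimensions")
    return [
        [(r, g, b) for r, g, b in zip(row_n, row_v, row_u)]
        for row_n, row_v, row_u in zip(nir, vis, uva)
    ]

# Tiny 2x2 example: near infrared mapped to red, human-visible light
# to green, ultraviolet-A to blue.
nir = [[255, 0], [128, 64]]
vis = [[0, 255], [128, 64]]
uva = [[0, 0], [255, 64]]
image = false_colour(nir, vis, uva)
print(image[0][0])  # (255, 0, 0): a pixel bright only in near infrared
```

A real workflow would of course operate on full-resolution RAW data and include registration (aligning the exposures), but the channel-stacking idea is exactly this.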


NASA uses this design frequently - the Mars rovers were equipped with greyscale broadband cameras, and in front of the lenses were placed motorized filter wheels which allowed sets of multispectral data to be quickly produced. Some of my very first false colour experiments were performed using the raw greyscale images from the rovers, which NASA has made available for download.

Certain specialized studio cameras designed to photograph static objects (merchandise, etc.) use this design to provide extremely accurate colour within the human-visible realm.

My photography is based on a slight variation of this approach. All digital cameras are sensitive to near infrared light (so much so that they incorporate an NIR-blocking filter in their optical path). The sensors of Nikon digital SLRs are also sensitive to ultraviolet-A light. This allows a Nikon DSLR (in my case, a secondhand D70) to serve as the "broadband camera" component. There is one major difference, which is that (because it was designed for conventional photography), a standard digital SLR incorporates a "Bayer-pattern filter" over the sensor, meaning that instead of having to create separate exposures for red, green, and blue light, those elements can be captured as a single exposure, requiring only three shots (near infrared, human-visible light, and ultraviolet-A) instead of five.

In order to obtain the best results, it is necessary to remove the aforementioned NIR-blocking filter from the optical path and replace it with an equally-sized piece of clear glass[2]. A few manufacturers produce specialized models which leave the factory in this state; the Fuji S3 Pro UVIR is the main example. There are numerous third parties who can perform this conversion on a standard DSLR - their customers are usually forensic photographers and scientists, so rates can be a little expensive. It is certainly possible to perform the conversion yourself, if you are comfortable performing detailed work inside complex electronic devices. Of course, anything other than factory construction/conversion will immediately void the warranty on the camera, and it is very easy to inflict significant damage while attempting the conversion. Be aware that this conversion means that forever after, taking regular photos will require fitting a human-visible light bandpass filter in front of your lenses.

I'm a DIY sort of person, so of course I performed the conversion myself on a secondhand D70. There are several guides to this process online, and I don't have any specific recommendations on which one to use. If you are determined to do the same thing, be sure to perform it in as dust-free an environment as you can find. A cleanroom would be ideal of course, but since I don't have access to one I used a table in the raised-floor server room in the building where I work. Remember that any dust caught between the piece of glass and the sensor will be a thorn in the side of your photos, and the Bayer filter is too delicate to clean using anything other than gentle puffs of air. Also be aware that the NIR-blocking filter is the light-blue piece of glass that looks like a small microscope slide; the shimmering grid is the Bayer filter, and attempting to remove it will destroy the sensor.

I chose the D70 because it was the best option available on a limited budget. Newer Nikon models would provide numerous advantages, including LiveView (which would be ideal for non-human-visible light, as the viewfinder can't be used), full-frame sensors, self-cleaning sensors (a huge benefit if yours is a dust magnet like mine), and higher sensitivity[5]. The D3S in particular (with its incredible jump in light-sensitivity) should be enormously useful, given the much longer exposures required when shooting ultraviolet-A through standard lenses. However, in early 2011, Andrea G. Blum ("annedi" on the Nikon Gear forums) made a most unfortunate discovery: the D3, D3S, D3X, D700, and D7000 all use a near infrared "shutter monitor" system inside the camera body, which contaminates any image shot on a multispectrally-converted body of those types. You read that right: as of this time, all of Nikon's best bodies are unusable for this type of photography. How sad is that?[6]

Although I have not tested them myself, the Sigma SD-1, SD-14, and SD-15 provide an interesting alternative to the "crack the camera open" modification: their NIR-blocking filter is incorporated in a removable section of the lens bayonet mount, meaning that it can be added or removed in the field. These cameras also use the Foveon sensor, which eliminates the need for a Bayer filter entirely (see below). I am most curious about their UVA performance, although according to Shane from Beyond Visible, it is extremely noisy.



Multiple, Synchronized Cameras

This is a fairly simple variation on the previous technique. Instead of one camera with interchangeable filters in front of the lens, a separate camera for each spectral band is mounted in a large frame or case. Sergei Mikhailovich Prokudin-Gorskii used the human-visible light equivalent of this to take thousands of colour photos of Russia beginning in 1909, well before most of the world had access to true colour photography[3].



A Single Camera with Multiple Sensors and Dichroic Mirrors

A dichroic mirror is an optical element which reflects one spectral band while allowing all others to pass through. Early colour video cameras for television used a design similar to this one:

Colour Imaging Using Dichroic Mirrors
[   ]
[   ]

The basic design in the first image was used in early television cameras, before the invention of the Bayer-pattern filter. Dichroic mirrors (optical elements which reflect one spectral band but allow others to pass through) are used to separate the desired bands and direct them to individual (greyscale) sensors. This same concept could theoretically allow for multispectral imaging (as depicted in the second image) by adding additional mirror/sensor pairs. For example, one for near infrared and another for ultraviolet-A light, for a total of five sensors and four dichroic mirrors.


This same general approach could be used to create a multispectral camera sensitive to five or more bands (near infrared, red, green, blue, ultraviolet-A, and possibly ultraviolet-B if a quartz lens were used). A very fancy variation might place the sensors on a rail/track assembly with computer control of their positions. This would allow compensation for the different effective focal length of the various bands when different lenses were used (see Lenses). Each sensor would have an independent autofocus system. One sensor (maybe green, or alternately whichever sensor was currently obtaining an image with the highest contrast, or met some other criterion) would be fixed to the same focal plane as the viewfinder, and the others would adjust themselves to bring the same subject plane into focus. Even the lowest-quality lens could theoretically produce images free from chromatic aberration.
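Since this rail-mounted design is purely hypothetical, the control logic can only be sketched in the abstract. The idea is just arithmetic: if each band's focal plane lands at a slightly different distance behind the lens, each sensor is offset from the reference sensor by the difference. The shift values below are invented for illustration; a real system would have to measure them per lens.

```python
# Hypothetical focal-plane shifts relative to green, in mm, for one
# imaginary lens. Positive = focuses behind the green focal plane.
FOCAL_SHIFT_MM = {
    "nir": +0.40,    # near infrared focuses further back
    "red": +0.10,
    "green": 0.00,   # reference sensor, fixed at the viewfinder's plane
    "blue": -0.05,
    "uva": -0.30,    # ultraviolet-A focuses closer to the lens
}

def sensor_offsets(reference="green"):
    """Offset each sensor along the rail so every band images the
    same subject plane in focus simultaneously."""
    base = FOCAL_SHIFT_MM[reference]
    return {band: shift - base for band, shift in FOCAL_SHIFT_MM.items()}

offsets = sensor_offsets()
print(offsets["uva"])  # -0.3
```

With a contrast-based criterion, the reference band could be reassigned on the fly and the table re-based against the new reference, which is all the second argument does.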



A Single Camera with a Multispectral Sensor

This is simply a variation on the traditional colour camera. Two main approaches come to mind - a more-complex Bayer-style filter, or a design based on the Foveon sensor.

The Foveon design replaces the traditional 2-dimensional Bayer grid with a sensor that distinguishes colour by depth: shorter wavelengths (e.g. blue) are captured near the surface of each pixel, and longer wavelengths (e.g. red) further into the sensor. This has advantages even in the realm of conventional photography; the Bayer filter only produces full-resolution colour images by interpolation (that is, each square group of four photosites - two green, one red, and one blue - is reconstructed into full-colour pixels through a process called "de-mosaicing").
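To make the interpolation concrete, here is the crudest possible de-mosaicing sketch: every 2x2 RGGB quad is expanded into four identical full-colour pixels, with the two green photosites averaged. Real cameras use far more sophisticated interpolation (considering neighbouring quads, edges, and so on); this only illustrates why the Bayer approach trades resolution for single-exposure colour.

```python
# Nearest-neighbour de-mosaicing of a raw RGGB Bayer mosaic.
# `raw` is a 2-D list of intensities with even dimensions, laid out as
# repeating quads:  R G
#                   G B

def demosaic_rggb(raw):
    """Reconstruct (R, G, B) pixels from a raw RGGB mosaic."""
    h, w = len(raw), len(raw[0])
    rgb = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) // 2  # average both greens
            b = raw[y + 1][x + 1]
            for dy in (0, 1):
                for dx in (0, 1):
                    rgb[y + dy][x + dx] = (r, g, b)
    return rgb

# A single quad: red photosite = 200, greens = 100 and 120, blue = 50.
mosaic = [[200, 100],
          [120, 50]]
print(demosaic_rggb(mosaic)[0][0])  # (200, 110, 50)
```

A Foveon-style sensor needs none of this, since every pixel location records all of its bands directly.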

A Bayer-style filter with more than one additional spectral band would quickly reduce the effective resolution of the camera, so I don't believe it's a good approach. However, see Virtual Filters for a discussion of Kodak's design that replaces one green element with a broadband element to greatly increase the overall light-sensitivity of the camera with promising results.



Hybrid Designs/Approaches

Obviously the potential exists to combine aspects of several of these systems. For example, if I ever manage to get my hands on a thermal imager, I would be using a combination of the "Broad-Band Camera with Filters in Front of the Lens" and "Multiple, Synchronized Cameras" (minus the synchronization) approaches.

A Few Thoughts on Potential Multispectral Cameras

I would love to see a DSLR that offered as many varieties of histogram as they have metering today (matrix, spot, etc.). It would be great to be able to take a picture of a bright object against a dark background, and then assess only the histogram of that bright object, for example.

Aside from design issues, virtually all image-processing/editing software is based around greyscale or three-channel (RGB) images[4]. This seems unlikely to change for the foreseeable future. If a manufacturer were to produce a true multispectral digital camera (that is, one which captures not only human-visible light, but additional bands simultaneously), I would think that for "legacy compatibility" purposes the easiest approach would be to output one RAW file for the RGB data, plus one additional greyscale RAW file for each added spectral band (or alternately, a separate greyscale file for every band, including the red, green, and blue components - or both). This would allow existing toolchains to be retained, with individual elements of those toolchains being replaced on an as-needed basis.
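As a rough sketch of that "one greyscale file per extra band" idea, here is each band serialized as a binary PGM file - chosen purely because it is a trivially simple greyscale format, not because any camera actually writes PGM. A legacy-compatible body would emit its normal RGB RAW plus one of these per additional band.

```python
# Encode one greyscale spectral band as a binary PGM (P5) image.
# `band` is a 2-D list of 8-bit intensity values.

def band_to_pgm(band, maxval=255):
    """Serialize a greyscale band: ASCII header, then raw pixel bytes."""
    h, w = len(band), len(band[0])
    header = f"P5\n{w} {h}\n{maxval}\n".encode("ascii")
    pixels = bytes(v for row in band for v in row)
    return header + pixels

# One hypothetical extra band (near infrared) from a tiny 2x2 capture.
nir_band = [[10, 20], [30, 40]]
pgm = band_to_pgm(nir_band)
print(pgm[:2])  # b'P5'
```

Any existing greyscale-capable tool in the chain could then open each band unmodified, which is the whole point of the legacy-compatibility argument.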

There are a few corporations which produce highly specialized scientific/military equipment that can capture multispectral (or hyperspectral) images in a single shot, and typically include custom software for dealing with the resulting data. These are so far beyond the world that I operate in that I can't really comment on them.

1. Bjørn Rørslett performed a test of the UV response in several Canon models and found them severely lacking (near the bottom of the page). In fairness to Canon, this is probably a good thing as far as most of their customers are concerned, because they don't want anything other than human-visible light to show up in their photos.
2. Optical quartz (or possibly fluorite) are also options, but because the index of refraction is different for those materials, a different thickness would be required than when substituting regular glass.
3. Frank Dellaert created an automated method to reconstruct the colour versions of the remainder of the Prokudin-Gorskii photos, beyond the basic collection which were breathtakingly processed by the Library of Congress.
4. YUV, CMYK, YCrCb, lab colour, etc., are really just variations on the RGB model, because they all represent what the human eye would see. They don't inherently incorporate the possibility of additional spectral bands.
5. Note: according to several other multispectral photographers, Live View does not work particularly well for UV on Nikon's current camera bodies, although I am hopeful that an extremely light-sensitive model like the D3S would behave differently. In addition, to my dismay, I discovered that most of the companies which perform multispectral conversions remove the sensor-cleaning apparatus as part of the conversion (because it contains a second IR-blocking filter). A few of them will convert that component as well, but will charge considerably more.
6. For the expert reader, the sadness of this fact is calculated using the following formula, in which ":(" represents the SI "woe and dismay" unit, "TG" represents the average number of teeth gnashed per second, and "BT" represents the number of bitter tears wept during that same period (both out of a random population sample of size > 10,000):
:( = (((TG + BT) / 2) * 47000000)^3