
Calculated Greyscale Images

article by Ben Lincoln

 

With the raw data of the image cube as the starting point, it becomes possible to mathematically derive additional greyscale images which illuminate the relationships between the individual spectral bands.

Mathematical Relationships

Because our eyes perceive (mostly) the intensity of three spectral bands, it seems natural to begin by treating other spectral bands the same way. But it can also be useful to generate images which instead represent, for example, the difference in intensity between two (or more) bands, or the ratio of one to another.
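For anyone who wants to experiment outside of The Mirror's Surface Breaks, here is a minimal sketch (in Python with NumPy) of how a difference or ratio image might be generated from two band images. The band names, the random placeholder data, and the rescaling approach are my own assumptions, not anything a particular tool requires.

import numpy as np

def to_grey(values):
    # Rescale an arbitrary floating-point array into 0-255 so it can be
    # saved as an 8-bit greyscale image.
    lo, hi = values.min(), values.max()
    if hi == lo:
        return np.zeros_like(values, dtype=np.uint8)
    return ((values - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Hypothetical registered band images as floating-point arrays.
nir = np.random.rand(480, 640)
red = np.random.rand(480, 640)

difference = to_grey(nir - red)            # e.g. "Difference: NIR-R"
ratio      = to_grey(nir / (red + 1e-6))   # e.g. "Ratio: NIR To R" (small offset avoids division by zero)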

In this group of images, notice that several of the difference images make it immediately obvious where the intensity varies across spectral bands near the centers of the flowers:

Differences
NIR (Grey)
R (Grey)
G (Grey)
B (Grey)
UVA (Grey)
Difference: B-G (Grey)
Difference: G-NIR (Grey)
Difference: NIR-B (Grey)
Difference: NIR-R (Grey)
Difference: R-B (Grey)
Difference: R-UVA (Grey)

 

Date Shot: 2010-10-16
Camera Body: Nikon D70 (Modified)
Lens: Nikon Micro-Nikkor 105mm f/4
Filters: Standard Set
Date Processed: 2010-12-19
Version: 1.0

 

Calculating the ratio instead of the difference can produce a similar, but much more pronounced effect:

Ratios
Ratio: NIR To B (Grey)
Ratio: NIR To R (Grey)
Ratio: R To UVA (Grey)
Ratio: G To B (Grey)
Ratio: B To NIR (Grey)

 

Date Shot: 2010-10-16
Camera Body: Nikon D70 (Modified)
Lens: Nikon Micro-Nikkor 105mm f/4
Filters: Standard Set
Date Processed: 2010-12-19
Version: 1.0

 

Whichever of the two is preferable depends greatly on the subject matter and intent of the person processing the imagery.

A technique commonly used in the satellite mapping/remote-sensing community is to calculate the "normalized difference" between two bands[1]. The basic formula is of the form ND(a, b) = (a-b)/(a+b). For example, the normalized difference of 7 and 19 is (7 - 19)/(7 + 19), which is equal to -6/13, or about -0.46. In most instances that I've seen, zero is treated as a threshold, with negative values being discarded. If you are using The Mirror's Surface Breaks, it can generate both positive and negative values. When dealing with greyscale images, it is easiest to create separate images for the positive and negative values, because there is no intuitive way to show the viewer where exactly zero lies within a single greyscale image. There is a solution to this dilemma, which is discussed in the article Gradient-Mapped False Colour Images.
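As a concrete illustration of the positive/negative split described above, here is a minimal sketch of one way it might be done. The band names and placeholder data are assumptions on my part, and The Mirror's Surface Breaks may handle the details differently internally.

import numpy as np

def normalized_difference(a, b):
    # ND(a, b) = (a - b) / (a + b), with any 0/0 pixels mapped to 0.
    total = a + b
    safe_total = np.where(total == 0, 1.0, total)
    return np.where(total == 0, 0.0, (a - b) / safe_total)

# Hypothetical registered band images as floating-point arrays.
nir  = np.random.rand(480, 640)
blue = np.random.rand(480, 640)

nd = normalized_difference(nir, blue)   # values fall within [-1, 1]

# Two separate greyscale images, because a single greyscale image gives the
# viewer no visible cue as to where zero lies.
positive = (np.clip(nd, 0.0, 1.0) * 255).astype(np.uint8)    # "ND (Pos)"
negative = (np.clip(-nd, 0.0, 1.0) * 255).astype(np.uint8)   # "ND (Neg)"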

Here are a few examples of positive and negative normalized differences in OnEarth imagery of Grand Canyon National Park.

Normalized Differences
ND (Pos): SRTM-IR1 (Grey)
ND (Neg): SRTM-IR1 (Grey)
ND (Pos): IR3-Green (Grey)
ND (Neg): IR3-Green (Grey)
ND (Pos): IR2-Blue (Grey)
ND (Neg): IR2-Blue (Grey)
ND (Pos): IR1-Blue (Grey)
ND (Neg): IR1-Blue (Grey)
ND (Pos): Red-Green (Grey)
ND (Neg): Red-Green (Grey)

 

 

You can see that the negative values are generally washed out and not particularly useful, but some of them (such as the IR1/Blue variation) do provide some high-contrast views of the course of the river.

Vegetation Indices

The normalized difference formula is the basis for a whole branch of the satellite imagery field in which the amount and health of local vegetation (or other aspects of the terrain) is estimated from orbit. The original method is called the "normalized difference vegetation index" (NDVI), and is calculated simply by using the near-infrared image as the "a" variable and the red image as the "b" variable in the normalized difference formula (so that NDVI = (NIR-Red)/(NIR+Red)). Quite a few variations on this theme have been developed over the years. They range from simple modifications (such as using the green image instead of red, yielding the "green normalized difference vegetation index" (GNDVI)) to complex reformulations (for example, the "Global Environmental Monitoring Index" (GEMI) and the "Angular Vegetation Index" (AVI)).
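In code, NDVI and GNDVI are simply the normalized difference sketch from earlier with specific bands plugged in. Again, the band names and placeholder data here are my own assumptions, not taken from any particular sensor.

import numpy as np

def normalized_difference(a, b):
    total = a + b
    safe_total = np.where(total == 0, 1.0, total)
    return np.where(total == 0, 0.0, (a - b) / safe_total)

# Hypothetical registered band images.
nir   = np.random.rand(480, 640)
red   = np.random.rand(480, 640)
green = np.random.rand(480, 640)

ndvi  = normalized_difference(nir, red)     # NDVI  = (NIR - Red)   / (NIR + Red)
gndvi = normalized_difference(nir, green)   # GNDVI = (NIR - Green) / (NIR + Green)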

A lengthy list of these indices and their formulae is located at the end of this article. Here are a few examples based on OnEarth imagery of Yellowstone National Park:

Vegetation Index Examples 1
NDVI (Positive) (Grey)
GNDVI (Positive) (Grey)
GEMI (Positive) (Grey)
ASVI (Positive) (Grey)
MSAVI (Positive) (Grey)

 

 

The same basic concept is also used to help detect other characteristics of the environment, such as distinguishing between water and land, or determining approximately how much sediment is present in a body of water.

This type of calculation can also be performed against multispectral imagery obtained with a camera on Earth, but it is very important to keep in mind that the technique was developed for satellite data, and the results may be wrong or misleading when applied to images captured in another way. For example, satellite imagery typically does not have to take into account shadows being cast on objects, or clouds moving in between exposures. Satellites capture images of very wide areas, as opposed to individual plants. Some of the indices even depend on factors like the angle between the source of light and the sensor. Did you remember to measure that when you were out taking pictures?

That having been said, like so many other things, the results are sometimes quite striking, and as long as a pretty picture (rather than reliable scientific data) is the end goal, there's no reason not to do it. Here are a few examples from Big Bend National Park:

Vegetation Index Examples 2
NDVI (Negative) (Grey)
NDVI (Positive) (Grey)
ASVI (Negative) (Grey)
MSAVI (Negative) (Grey)

 

Date Shot: 2010-07-07
Camera Body: Nikon D70 (Modified)
Lens: Nikon Nikkor 24mm f/2.8(?)
Filters: Standard Set
Date Processed: 2010-12-09
Version: 1.0

 

...and a phalaenopsis orchid:

Vegetation Index Examples 3
ARVI (Negative) (Grey)
ARVI (Positive) (Grey)
GNDVI (Negative) (Grey)
GNDVI (Positive) (Grey)
MSAVI2 (Grey)

 

Date Shot: 2010-10-16
Camera Body: Nikon D70 (Modified)
Lens: Nikon Micro-Nikkor 105mm f/4
Filters: Standard Set
Date Processed: 2010-12-16
Version: 1.0

 

Vegetation Index Technical Background

The Mirror's Surface Breaks includes configuration files that define numerous vegetation indices and other satellite-imagery-style calculated variations. For those who are interested, here is a list of the various functions and where I obtained the formula for each:

Abbreviations:

From Landsat Image Processing, James S. Aber, 2009:

From Above-Ground Biomass Estimation of Successional and Mature Forests Using TM Images in the Amazon Basin, Dengsheng Lu, Paul Mausel, Eduardo Brondizio, and Emilio Moran, 2002:

The revised versions of the "Tasseled Cap" KT1 ("Brightness"), KT2 ("Greenness"), KT3 ("Wetness"), KT4[3], KT5, and KT6 formulae were aggregated from ARSC Landsat Vegetation Indices (citing numerous papers); Tasseled Cap Transformation: Mississippi Coastal Corridor July 24, 2000, Dr. Roger King and Dr. Charles O'Hara, 2001; and The Tasseled Cap Transformation in Remote Sensing, Thayer Watkins, date unknown, citing "A Physically-Based Transformation of Thematic Mapper Data - The TM Tasseled Cap", Eric P. Crist and Richard C. Cicone, IEEE Transactions on Geoscience and Remote Sensing, Vol. 22, No. 3, May 1984, pp. 256-263 [ Important: see footnote 4[4] below! ]. They are calculated using the following matrices: that is, for data in a given format, each "component" greyscale image ("brightness", "greenness", "wetness", and so on) is calculated by multiplying each of the "thematic mapper" channels (except for TM6[5]) by the corresponding coefficient[6] and summing the results. A short code sketch of this calculation appears after the matrices below.

LandSat 4 DN ("Digital Number") Data

Matrix

  TM1 TM2 TM3 TM4 TM5 TM7
KT1 (Brightness) 0.3037 0.2793 0.4743 0.5585 0.5082 0.1863
KT2 (Greenness) -0.2848 -0.2435 -0.5436 0.7243 0.084 -0.18
KT3 (Wetness) 0.1509 0.1973 0.3279 0.3406 -0.7112 -0.4572
KT4 (Fourth Component) 0.8832 -0.0819 -0.458 -0.0032 -0.0563 0.013
KT5 (Fifth Component) 0.0573 -0.026 0.0335 -0.1943 0.4766 -0.8545
KT6 (Sixth Component) 0.1238 -0.9038 0.4041 0.0573 -0.0261 0.024

Formulae

KT1 = (TM1 * 0.3037) + (TM2 * 0.2793) + (TM3 * 0.4743) + (TM4 * 0.5585) + (TM5 * 0.5082) + (TM7 * 0.1863)
KT2 = (TM1 * -0.2848) + (TM2 * -0.2435) + (TM3 * -0.5436) + (TM4 * 0.7243) + (TM5 * 0.084) + (TM7 * -0.18)
KT3 = (TM1 * 0.1509) + (TM2 * 0.1973) + (TM3 * 0.3279) + (TM4 * 0.3406) + (TM5 * -0.7112) + (TM7 * -0.4572)
KT4 = (TM1 * 0.8832) + (TM2 * -0.0819) + (TM3 * -0.458) + (TM4 * -0.0032) + (TM5 * -0.0563) + (TM7 * 0.013)
KT5 = (TM1 * 0.0573) + (TM2 * -0.026) + (TM3 * 0.0335) + (TM4 * -0.1943) + (TM5 * 0.4766) + (TM7 * -0.8545)
KT6 = (TM1 * 0.1238) + (TM2 * -0.9038) + (TM3 * 0.4041) + (TM4 * 0.0573) + (TM5 * -0.0261) + (TM7 * 0.024)

LandSat 4 Reflectance Data

Matrix

  TM1 TM2 TM3 TM4 TM5 TM7
KT1 (Brightness) 0.2043 0.4158 0.5524 0.5741 0.3124 0.2303
KT2 (Greenness) -0.1063 -0.2819 -0.4934 0.794 -0.0002 -0.1446
KT3 (Wetness) 0.0315 0.2021 0.3102 0.1594 -0.6806 -0.6109
KT4 (Fourth Component) -0.2117 -0.0284 0.1302 -0.1007 0.6529 -0.7078
KT5 (Fifth Component) -0.8669 -0.1835 0.3856 0.0408 -0.1132 0.2272
KT6 (Sixth Component) 0.3677 0.82 0.4354 0.0518 -0.0066 -0.0104

Formulae

KT1 = (TM1 * 0.2043) + (TM2 * 0.4158) + (TM3 * 0.5524) + (TM4 * 0.5741) + (TM5 * 0.3124) + (TM7 * 0.2303)
KT2 = (TM1 * -0.1063) + (TM2 * -0.2819) + (TM3 * -0.4934) + (TM4 * 0.794) + (TM5 * -0.0002) + (TM7 * -0.1446)
KT3 = (TM1 * 0.0315) + (TM2 * 0.2021) + (TM3 * 0.3102) + (TM4 * 0.1594) + (TM5 * -0.6806) + (TM7 * -0.6109)
KT4 = (TM1 * -0.2117) + (TM2 * -0.0284) + (TM3 * 0.1302) + (TM4 * -0.1007) + (TM5 * 0.6529) + (TM7 * -0.7078)
KT5 = (TM1 * -0.8669) + (TM2 * -0.1835) + (TM3 * 0.3856) + (TM4 * 0.0408) + (TM5 * -0.1132) + (TM7 * 0.2272)
KT6 = (TM1 * 0.3677) + (TM2 * 0.82) + (TM3 * 0.4354) + (TM4 * 0.0518) + (TM5 * -0.0066) + (TM7 * -0.0104)

LandSat 5 DN ("Digital Number") Data

Matrix

  TM1 TM2 TM3 TM4 TM5 TM7
KT1 (Brightness) 0.2909 0.2493 0.4806 0.5568 0.4438 0.1706
KT2 (Greenness) -0.2728 -0.2174 -0.5508 0.722 0.0733 -0.1648
KT3 (Wetness) 0.1446 0.1761 0.3322 0.3396 -0.621 -0.4186
KT4 (Fourth Component) 0.8461 -0.0731 -0.464 -0.0032 -0.0492 0.0119
KT5 (Fifth Component) 0.0549 -0.0232 0.0339 -0.1937 0.4162 -0.7823
KT6 (Sixth Component) 0.1186 -0.8069 0.4094 0.0571 -0.0228 0.22

Formulae

KT1 = (TM1 * 0.2909) + (TM2 * 0.2493) + (TM3 * 0.4806) + (TM4 * 0.5568) + (TM5 * 0.4438) + (TM7 * 0.1706)
KT2 = (TM1 * -0.2728) + (TM2 * -0.2174) + (TM3 * -0.5508) + (TM4 * 0.722) + (TM5 * 0.0733) + (TM7 * -0.1648)
KT3 = (TM1 * 0.1446) + (TM2 * 0.1761) + (TM3 * 0.3322) + (TM4 * 0.3396) + (TM5 * -0.621) + (TM7 * -0.4186)
KT4 = (TM1 * 0.8461) + (TM2 * -0.0731) + (TM3 * -0.464) + (TM4 * -0.0032) + (TM5 * -0.0492) + (TM7 * 0.0119)
KT5 = (TM1 * 0.0549) + (TM2 * -0.0232) + (TM3 * 0.0339) + (TM4 * -0.1937) + (TM5 * 0.4162) + (TM7 * -0.7823)
KT6 = (TM1 * 0.1186) + (TM2 * -0.8069) + (TM3 * 0.4094) + (TM4 * 0.0571) + (TM5 * -0.0228) + (TM7 * 0.22)

LandSat 7 Reflectance Data

Matrix

  TM1 TM2 TM3 TM4 TM5 TM7
KT1 (Brightness) 0.3561 0.3972 0.3904 0.6966 0.2286 0.1596
KT2 (Greenness) -0.3344 -0.3544 -0.4556 0.6966 -0.0242 -0.263
KT3 (Wetness) 0.2626 0.2141 0.0926 0.0656 -0.7629 -0.5388
KT4 (Fourth Component) 0.0805 -0.0498 0.195 -0.1327 0.5752 -0.7775
KT5 (Fifth Component) -0.7252 -0.0202 0.6683 0.0631 -0.1494 -0.0274
KT6 (Sixth Component) 0.4 -0.8172 0.3832 0.0602 -0.1095 0.0985

Formulae

KT1 = (TM1 * 0.3561) + (TM2 * 0.3972) + (TM3 * 0.3904) + (TM4 * 0.6966) + (TM5 * 0.2286) + (TM7 * 0.1596)
KT2 = (TM1 * -0.3344) + (TM2 * -0.3544) + (TM3 * -0.4556) + (TM4 * 0.6966) + (TM5 * -0.0242) + (TM7 * -0.263)
KT3 = (TM1 * 0.2626) + (TM2 * 0.2141) + (TM3 * 0.0926) + (TM4 * 0.0656) + (TM5 * -0.7629) + (TM7 * -0.5388)
KT4 = (TM1 * 0.0805) + (TM2 * -0.0498) + (TM3 * 0.195) + (TM4 * -0.1327) + (TM5 * 0.5752) + (TM7 * -0.7775)
KT5 = (TM1 * -0.7252) + (TM2 * -0.0202) + (TM3 * 0.6683) + (TM4 * 0.0631) + (TM5 * -0.1494) + (TM7 * -0.0274)
KT6 = (TM1 * 0.4) + (TM2 * -0.8172) + (TM3 * 0.3832) + (TM4 * 0.0602) + (TM5 * -0.1095) + (TM7 * 0.0985)
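To make the "multiply each channel by a coefficient and sum" description concrete, here is a minimal sketch that applies one of the matrices above (the LandSat 4 DN coefficients) to a stack of TM band images. The array layout and the placeholder data are assumptions on my part.

import numpy as np

# Rows: KT1..KT6; columns: TM1, TM2, TM3, TM4, TM5, TM7 (TM6 is omitted).
# Coefficients are the "LandSat 4 DN" matrix listed above.
LANDSAT4_DN = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],  # KT1 (Brightness)
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.084,  -0.18  ],  # KT2 (Greenness)
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],  # KT3 (Wetness)
    [ 0.8832, -0.0819, -0.458,  -0.0032, -0.0563,  0.013 ],  # KT4
    [ 0.0573, -0.026,   0.0335, -0.1943,  0.4766, -0.8545],  # KT5
    [ 0.1238, -0.9038,  0.4041,  0.0573, -0.0261,  0.024 ],  # KT6
])

# Hypothetical stack of six registered TM band images, shape (6, height, width),
# in the order TM1, TM2, TM3, TM4, TM5, TM7.
tm_bands = np.random.rand(6, 480, 640)

# Each KT component is the coefficient-weighted sum of the six input bands.
kt = np.tensordot(LANDSAT4_DN, tm_bands, axes=1)   # shape (6, height, width)
brightness, greenness, wetness = kt[0], kt[1], kt[2]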

From Vegetation Detection for Mobile Robot Navigation, David M. Bradley, Scott M. Thayer, Anthony Stentz, and Peter Rander, 2004:

From Tropical Wet Realms Of Central Africa, Part II, Paul R. Baumann, 2009:

From Bilko Mini-Lesson 8: Vegetation Indices:

From Band Combinations, James W. Quinn, 2001:

Nonstandard formulae that I added myself for experimental purposes:

 
Footnotes
1. Just to be confusing, the "normalized difference" is often (but not always) different from the values that result if the difference is calculated first and those values are then normalized in the more common sense (for example, rescaled into a 0-1 range).
2. For these formulas, no formal name was given, so I based the names on the same category of formula in one of the other cited sources.
3. I have seen one source refer to KT4 as "haze", but others do not, and it doesn't seem to be a very accurate indicator of haze, at least with the imagery I've run through it.
4. When I originally began researching this content, it was surprisingly difficult to find information online about what exactly the "KT"-series calculations were actually supposed to represent, although the paper I was using for reference (Above-Ground Biomass Estimation of Successional and Mature Forests Using TM Images in the Amazon Basin) referred to them as "Tasseled Cap", which I found quite mysterious. It actually refers to the shape of the graph in the 1976 paper by R. J. Kauth and G. S. Thomas which was the basis for this entire range of calculations (see The Tasseled Cap Transformation in Remote Sensing for a scan of the graph, or Tasseled Cap Transformation for a comparison between the graph and an actual tasseled cap).
5. TM6, as indicated in the list near the beginning of the article, is the thermal imager. It seems to be left out of calculated imagery fairly frequently - maybe because it is lower-resolution than the other bands?
6. I assume that the difference between LandSat generations (4, 5, or 7) is due to differences in sensitivity between the bands on each generation of satellite. However, to my mind, this only explains the "DN" ("digital number") values, not the reflectance values. I am puzzled as to why the coefficients are different for LandSat 4 and LandSat 7 reflectance data, because - given my understanding of the term "reflectance", and given that the three generations of LandSat sensor cover the same spectral ranges - the coefficients should be identical. Except they're not.
 