[ Beneath the Waves ]

Aligning Multiple Exposures in Hugin

article by Ben Lincoln

 

Once you have a set of multispectral exposures (probably captured using the method from Taking Multispectral Pictures), you can proceed to process them on a computer. The first step is to align them, and the method I use is based on Hugin, which is a front-end for the Panorama Tools command-line utilities. There is a longer explanation of Hugin in the Other Tools article. These instructions were originally written for Hugin 0.7.0, because at the time that was the most recent version for which a precompiled Windows build was available, and I don't expect most photographers (even among the subset who are unusual enough to want to use this process) to be interested in compiling their own binaries. Hugin is also available for Linux and Mac OS X, and I imagine these instructions are more or less valid for those versions.

The high-level reason for this work is twofold: first, to correct for any jostling of the tripod between your exposures; second, to correct for the slightly different effective focal lengths that the exposures will have due to the longer (in the case of near infrared) and/or shorter (in the case of ultraviolet-A) wavelengths of light. If you are not a human, but an advanced robot or cyborg with motion-control software that guarantees the exact same shot for multiple exposures, and you are using an expensive multispectral lens whose design renders the focal lengths identical, you can skip this lengthy process altogether.

Remember that Hugin's intended purpose is for panorama stitching, not aligning multispectral exposures, so some additional steps are required for this repurposing.

RAW Conversion and Pre-Processing

Before Hugin even comes into play, it's necessary to turn the RAW files from your camera into something it can use. There are two variations on this section, depending on whether or not your image-editing software supports 16-bit-per-channel images[1]. If it does, you should most definitely use them, because they provide enormously greater dynamic range and fidelity than older 8-bit-per-channel formats. In the case of these steps specifically, much of the work can then be performed using non-destructive editing (effects layers, etc.).

If you are working with Adobe Photoshop®, I strongly recommend changing the greyscale working space from the default, because the default that Photoshop® uses is stupid and confusing and steals lollipops from small children just to make them cry. To do this (on Windows, at least), while in Photoshop®'s main window, press CTRL-Shift-K on your keyboard to open the Colour Settings window. In the Working Spaces section, there is a drop-down for Gray:. Select "Gray Gamma 2.2", then click OK.

If your image-editing software supports 16-bit-per-channel images, use these steps:

  1. Open the RAW files in the RAW converter.
  2. If you used a grey card while shooting, use the grey card shots to automatically set the white balance of the real exposures. If performed correctly, this should cause any near infrared exposures to appear as greyscale ("black-and-white"), and reduce the purple tint of any ultraviolet-A shots. If you did not use a grey card, adjust the white balance manually to the best of your ability. For near infrared and ultraviolet-A exposures, it is generally possible to approximate a grey card by clicking on an area of the image of medium brightness. For the human-visible-light exposure, if there are any true grey objects in the shot (a concrete sidewalk, etc.), clicking on them with the white balance tool will serve the same purpose.
  3. Configure the RAW conversion software to convert the near infrared and ultraviolet-A exposures to greyscale. This will remove any lingering tinting artifacts, and in better software will perform an initial optimization of the dynamic range.
  4. Disable any other processing. For example, Adobe RAW defaults to boosting brightness, contrast, etc. Make sure all of these options are set to the equivalent of 0 (no change from the RAW file).
  5. Make sure the output format is 16-bit-per-channel.
  6. If the RAW converter is integrated with the image editor (for example, Photoshop/Adobe RAW), open the files. If you are using a separate RAW converter, save the images as 16-bit-per-channel TIFFs or another lossless format that supports 16-bit-per-channel files (PNG, etc.), then open those files in your image-editing software.
  7. In the image-editing software, create a Levels effects layer for each of the exposures and use it to normalize the dynamic range by adjusting the left and/or right sliders to "trim off" the section of histogram that doesn't contain any data. This step is performed using an effects layer so that if you change your mind later, you can come back and adjust the levels in a different way.
  8. Save all of the exposures as separate files in the native format of the image-editing software.

If your image-editing software does not support 16-bit-per-channel images, use these steps:

  1. Open the RAW files in the RAW converter.
  2. If you used a grey card while shooting, use the grey card shots to automatically set the white balance of the real exposures. If performed correctly, this should cause any near infrared exposures to appear as greyscale ("black-and-white"), and reduce the purple tint of any ultraviolet-A shots. If you did not use a grey card, adjust the white balance manually to the best of your ability. For near infrared and ultraviolet-A exposures, it is generally possible to approximate a grey card by clicking on an area of the image of medium brightness. For the human-visible-light exposure, if there are any true grey objects in the shot (a concrete sidewalk, etc.), clicking on them with the white balance tool will serve the same purpose.
  3. Configure the RAW conversion software to convert the near infrared and ultraviolet-A exposures to greyscale. This will remove any lingering tinting artifacts, and in better software will perform an initial optimization of the dynamic range.
  4. Use the RAW converter's processing options to normalize the dynamic range of all of the exposures (for example, by adjusting the response curves). The histogram of the image should generally extend from the left side to the right, but not be "cut off" at either end. Near infrared and ultraviolet-A shots will often start out with a histogram that appears as a narrow spike in the middle of the graph; the goal here is to spread that spike out into a nice curve. Performing this step in the RAW converter is necessary for this variation because after downconversion to 8-bit-per-channel, over 90% of the image fidelity (depending on the specific RAW format) is permanently lost (see the short demonstration after this list).[2]
  5. If the RAW converter is integrated with the image editor (for example, Photoshop/Adobe RAW), open the files. If you are using a separate RAW converter, save the images as 8-bit-per-channel TIFFs or another lossless format (PNG, etc.), then open those files in your image-editing software.
  6. Save all of the exposures as separate files in the native format of the image-editing software.
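
To make footnote 2's point concrete, here is a toy demonstration (in Python, using the numpy library; the array contents are arbitrary). Once the data has been squeezed into 8 bits per channel, at most 256 distinct levels survive, no matter how carefully the histogram is stretched afterwards:

    import numpy as np

    # Fabricate a 16-bit greyscale image with a wide spread of values.
    rng = np.random.default_rng(0)
    img16 = rng.integers(0, 65536, size=(512, 512), dtype=np.uint16)

    img8 = (img16 >> 8).astype(np.uint8)     # downconvert to 8 bits per channel
    restored = img8.astype(np.uint16) << 8   # stretch back to the 16-bit scale

    print(np.unique(img16).size)     # tens of thousands of distinct levels
    print(np.unique(restored).size)  # at most 256
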
White Balance of Non-Human-Visible Exposures
[ Uncorrected ]
[ Corrected ]

The first of these images shows one of the ultraviolet-A exposures from "Stovetop Bokeh" (see Thermal versus Near Infrared) as it appeared uncorrected in RAW format. The purple tint is an artifact introduced by the white balance settings. After adjusting the white balance as much as is possible for UVA exposures (which - unlike near infrared - cannot be fully compensated for by white balance alone) and selecting the Convert to Grayscale option in the RAW converter, the exposure appears in more accurate form as depicted in the second image.
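
If you are curious what the grey card correction amounts to mathematically, here is a minimal sketch in Python (using the numpy and tifffile libraries; the file names and patch coordinates are hypothetical). It scales each channel so that the pixels covering the grey card average out to neutral grey, which is a crude stand-in for what a real RAW converter's white balance tool does:

    import numpy as np
    import tifffile

    def grey_card_balance(in_path, patch, out_path):
        """Scale each channel so a grey-card patch averages to neutral grey.

        patch is (top, bottom, left, right) in pixel coordinates.
        """
        img = tifffile.imread(in_path).astype(np.float64)  # H x W x 3, 16-bit
        top, bottom, left, right = patch
        card = img[top:bottom, left:right].reshape(-1, 3)
        channel_means = card.mean(axis=0)
        target = channel_means.mean()
        balanced = img * (target / channel_means)          # per-channel gain
        tifffile.imwrite(out_path, np.clip(balanced, 0, 65535).astype(np.uint16))

    grey_card_balance("uva_raw.tif", (900, 1100, 1200, 1400), "uva_balanced.tif")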

 
Normalizing Dynamic Range Using Histograms and Levels
[ An image with a poor histogram ]
[ An image with a better histogram ]
[ How to turn the former into the latter using levels ]

The two versions of this image differ only in that the second has had a Levels adjustment layer applied in Photoshop® in order to normalize its dynamic range. In the original version, you can see that most of the histogram is empty, with a narrow spike in the middle. By "cropping" the dynamic range and expanding what's left to fill the entire histogram (which is what this use of Levels does), the clarity is greatly improved. This type of processing is especially critical for exposures that capture non-human-visible light. Obviously you should endeavour to shoot the photos in a way that results in as well-balanced a histogram as possible, but this software enhancement is usually necessary even with a very well-composed shot when using lenses not specifically designed for multispectral use.
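
For those who prefer to script this step, here is a minimal sketch of the same "trim off the empty ends of the histogram" operation (Python again, with numpy and tifffile; the file names are hypothetical). Clipping at the 0.1st and 99.9th percentiles roughly approximates dragging the Levels sliders in to where the data actually begins; those percentile values are an assumption you will want to tune per image:

    import numpy as np
    import tifffile

    def normalize_levels(in_path, out_path, low_pct=0.1, high_pct=99.9):
        img = tifffile.imread(in_path).astype(np.float64)
        low, high = np.percentile(img, [low_pct, high_pct])
        # Stretch the surviving range to fill the full 16-bit scale.
        stretched = (img - low) / max(high - low, 1.0) * 65535.0
        tifffile.imwrite(out_path, np.clip(stretched, 0, 65535).astype(np.uint16))

    normalize_levels("nir_exposure.tif", "nir_normalized.tif")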

 

Image Cleanup

Now that the images are open in the editing software, use whichever means you are comfortable/familiar with to remove any dust or other minor defects. If there are any near infrared or ultraviolet hotspots, do not attempt to correct them at this stage, because doing so convincingly requires working with a composite image in a later step.

While checking for dust, I recommend examining the image very closely, because dust spots that are barely noticeable in the RGB rendition of an image may become glaringly obvious in a false colour composite. For example, if a shot contains a blue sky, switch to a view of just the red colour channel and you are likely to notice a number of dark dust spots that are next to invisible otherwise. If that uncorrected red channel is shifted to the green channel (for example, for an NIR-R-G variation), those dust specks will stand out like a sore thumb. It can also help to temporarily add a brightness/contrast adjustment layer to make any defects easier to find.

Hidden Dust Specks
[ R-G-B ]
[ Red channel only ]

This human-visible image looks fine normally, but when the red channel is isolated, several dust specks (highlighted) are easily visible. If they are not removed, they will become glaringly obvious when the red channel is shifted into the green channel for false colour compositing.
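
The same check (and the channel shift that makes the specks so obvious) is easy to express in code. A minimal sketch in Python with numpy and tifffile, assuming aligned, same-sized 16-bit inputs and hypothetical file names:

    import numpy as np
    import tifffile

    visible = tifffile.imread("visible.tif")  # H x W x 3 (RGB)
    nir = tifffile.imread("nir.tif")          # H x W (greyscale)

    # Isolate the red channel to inspect it for dust, as in the example above.
    tifffile.imwrite("red_channel_only.tif", visible[:, :, 0])

    # NIR-R-G false colour: NIR into red, red into green, green into blue.
    composite = np.dstack((nir, visible[:, :, 0], visible[:, :, 1]))
    tifffile.imwrite("nir_r_g.tif", composite)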

 

Once you are satisfied that the images are good enough and have saved them (again, in the native format of the image-editing software in order to preserve any adjustment layers, etc.), save copies as TIFFs (TIFF is the best format that Hugin supports). For the near infrared and/or ultraviolet-A exposures, be sure to convert them to RGB colour before saving them as TIFFs. This additional step is necessary because while the Hugin front-end supports greyscale TIFFs, some of its underlying components (Autopano-SIFT, etc.) only support RGB formats.
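
If there are many exposures to convert, the greyscale-to-RGB step can also be scripted. A minimal sketch (numpy and tifffile once more, with hypothetical file names) that simply replicates the single channel three times, which changes nothing visually but keeps Autopano-SIFT from rejecting the file:

    import numpy as np
    import tifffile

    grey = tifffile.imread("nir_cleaned.tif")  # H x W, 16-bit greyscale
    rgb = np.dstack((grey, grey, grey))        # identical R, G, and B channels
    tifffile.imwrite("nir_for_hugin.tif", rgb)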

Aligning the Images in Hugin

The remainder of the alignment process takes place in Hugin. These instructions have been updated for the 2010.4.0 release of that utility.

Why Registration and Effective-Focal-Length Compensation is Important
[ Comparison ]

Here's a comparison between two versions of the NIR-R-G false colour variation of a photo that I took in Bodie State Historic Park. On the left is the result of working directly with the original exposures. On the right is the result after the exposures were processed through Hugin to correct the registration and compensate for the different effective focal length of the near infrared shot.

 
  1. Launch Hugin.
  2. Select the Images tab.
  3. Click the Add individual images... button.
  4. Navigate to the location where you stored the final TIFFs in the Image Cleanup section, above. Select all of them, then click Open.
  5. In the Camera and Lens data dialogue, enter the focal length of the lens you used in the Focal length: field, and the crop factor of your camera's sensor in the Focal length multiplier: field. This will automatically populate the HFOV (v): field. If you don't know the crop factor for your camera's sensor, you will need to look it up online. For example, Nikon "DX" bodies (the D70, D80, etc.) have sensors with crop factors of 1.5, and "FX" bodies (the D3, D700, etc.) have a crop factor of 1, whereas Sigma's SD-14 and SD-15 have a crop factor of 1.7, but the SD-1 has a crop factor of 1.5 like Nikon DX bodies. For single-frame shots, this information is not critical, but it is very important to get it right if you are constructing a multi-shot panorama (discussed below). If your TIFFs contain EXIF data for the lens focal length and crop factor, this dialogue will not be displayed.
  6. Verify that the Lens type: drop-down is set to Normal (rectilinear) and click OK. Again, if your TIFFs contain EXIF data for the lens focal length and crop factor, this dialogue will not be displayed.
  7. In the list of images, select the human-visible-light version. Click the Anchor this image for position and Anchor this image for exposure buttons near the bottom of the main Hugin window.
  8. If you have not already done so previously, configure the automatic control point generation options:
    1. Go to the File menu and choose Preferences.
    2. Select the Control Points Editor tab.
    3. Enter these values, which are the ones I've found most useful so far: Patch width: 21, Search area width: 5, Local search area width: 14, Correlation Threshold: 0.7, Peak Curvature Threshold: 0, Enable rotation search: checked, Start angle: -10, Stop angle: 10, Steps: 12.
    4. Select the Control Point Detectors tab (this was labeled Autopano in earlier versions of Hugin).
    5. In the list of programs, choose Autopano-SIFT-C, then click the Edit button.
    6. In the Parameters window which opens, enter this text into the Arguments: field: --maxmatches 96 --projection %f,%v %o %i and click OK. (A sketch of the command-line pipeline these arguments drive appears after this list of steps.)
    7. In the list of programs, choose Align image stack, then click the Edit button.
    8. In the Parameters window which opens, enter this text into the Arguments: field: -f %v -p %o %i and click OK.
    9. In the main Preferences window, switch to the Stitching tab. Uncheck Create cropped images by default, then click OK.
    You should only need to perform this step once - Hugin will retain the settings unless you change them again[3].
  9. In the Images tab of the main window, select the human-visible-light image and one of the others (not both of the others at once, unless you have a fairly fast PC).
  10. In the lower-left corner of this tab is a section labeled Feature Matching (Autopano). Make sure that the Settings: drop-down is set to Autopano-SIFT-C. The value in the Points per Overlap: box doesn't matter, because the settings configured above override it to always be 96.
  11. Click the Create control points button. A new window should appear. Depending on the specifics of your images and the speed of your computer, this step can take anywhere from less than a minute to over an hour to finish. If you immediately receive a non-specific error, verify that you saved all of the TIFFs as RGB and not greyscale. Autopano-SIFT will fail instantly if one of the images is greyscale. If all of the images are already RGB, a failure usually indicates that you need to manually specify the control points, which is discussed below. You can also try temporarily modifying the near infrared and/or ultraviolet-A images to make it easier for Autopano-SIFT to detect matches with the human-visible-light shot (also discussed below).
  12. In the Feature Matching (Autopano) section of the window, change the Settings: drop-down to Align image stack, then click the Create control points button again.
  13. If you have multiple non-human-visible exposures, repeat the previous three steps for each of them (performing a match between them and the human-visible shot).
  14. Select the Camera and Lens tab. If there is a near infrared exposure listed, select it. Near the bottom of the window, uncheck all of the Link boxes, then click the New Lens button.
  15. Repeat the previous step for any other non-human-visible exposures (ultraviolet-A, etc.), although the Link boxes will already be unchecked. It is critical that you assign each exposure a separate lens, because the different spectral bands mean that they effectively have a slightly longer or shorter focal length (usually about 0.1mm) than the human-visible version. If you skip this step, the registration of the near infrared and/or ultraviolet-A shots will become more and more inaccurate the further you move from the center of the image.
  16. Select the Control Points tab.
  17. In the left half of the window, use the drop-down to select the human-visible exposure. In the right half of the window, select one of the other exposures.
  18. If there are already a large number of control points displayed, and they appear more or less accurate, you can skip to the next step. Otherwise you will need to manually specify control points. Remember that the more accurate control points there are, the better the final result, so I try to end up with at least 10-20 (and preferably over 50) between each image. Control points are specified by clicking on one of the two images. If Hugin already has at least one control point between the pair, it will attempt to automatically position the point in the other image. Otherwise, you will need to click in roughly the same place in the other image. Verify that the two control points represent the same location in both images - not just sort of the same location, but the exact same location. Choose high-contrast, fixed points for this - geographic features, buildings, tree trunks, etc. are good choices. Leaves which are blowing in the wind, shadows, and clouds are poor choices. Repeat this process until you believe you have a sufficient number of control points between the images.
  19. If there are more than two exposures in your set (for example, human-visible, near infrared, and ultraviolet-A), leave the human-visible shot selected in the left half of the window, select one of the additional exposures, and repeat the previous step. Note that while you can attempt matches between e.g. the near infrared and ultraviolet-A shots, you may or may not find the results worthwhile. I tend to not perform matches between them[4].
  20. Click the Optimiser tab. In the Optimise drop-down, select Positions and View. Then click the drop-down again and select the Custom parameters below. This will cause most of the correct custom options to be selected, with only a minor correction necessary: in the Lens Parameters section, uncheck the 0 checkbox in the view (v) subsection. Leave all of the other checkboxes alone[5]. Experienced Hugin users will realize that the goal here is to use one of the images (in this case, the human-visible exposure) as an unchanging reference for both image orientation and field of view. Leaving all three view (v) boxes checked will usually cause bizarre results due to the lack of a solid reference point.
  21. Click the Optimise now! button. After a few seconds to a few minutes (depending on the number of images and control points), a results dialogue will be displayed. You can make a note of the values here, or just click OK.
  22. Open the Control Point Table (either by clicking the button in the toolbar which is second from the right, by pressing the F3 key on your keyboard, or by selecting Control Point Table from the View menu).
  23. In the Control Points window, click on the Distance column header to sort by that column. Scroll to the beginning or end, depending on which of them contains the largest numbers.
  24. Highlight any entries in which the Distance is greater than or equal to 1.00 and delete them. These control points are spurious and will cause alignment problems. If this means deleting all of your control points, then yes, you will generally need to start over, although usually this only happens if you try to optimize parameters other than the orientation and view. If you have a great bounty of control points whose distance is less than 1, you can nitpick further and e.g. delete any that are greater than 0.70 (or some other value).
  25. In the main window, click the Optimise now! button again. Verify that in the Control Points window, the values are still all below 1. If they are not, delete the values greater than 1 and consider adding more control points.
  26. Once you have some experience with Hugin, you may begin to experiment with other optimization parameters (such as lens distortion and translation). If I shot the images from a tripod, I rarely feel the need to use these, but for panoramas and other less-than-ideal scenarios, they can be very beneficial.
  27. Once you are satisfied with the optimized control point list, select the Stitcher tab.
  28. In the projection (f): drop-down, select Rectilinear. This will result in an image of the same projection as with a standard camera lens. You can of course experiment with other projections, but for single-frame shots I find that all the other options are either gimmicky or lack the character of a real camera lens.
  29. Click the Calculate Field of View button. This should populate the two adjacent fields with values dependent on the focal length and crop factor you specified when opening the images. For example, a 200mm lens with a crop factor of 1.5 will result in values of 6 and 4, whereas a 28mm lens with the same crop factor will result in values of 46 and 32.
  30. This step is optional, but because I like to have more control over the cropping, I increase the two Field of View values by about 10%. This will result in an empty border around the results, which I will trim away to my own liking in the image-editing software.
  31. Click the Calculate Optimal Size button. This should cause the two Canvas-Size values to adjust to numbers that are reasonably close to the resolution of the TIFFs you imported (plus a bit of headroom if you chose to follow the previous step).
  32. In the Panorama Outputs: section, make sure the Format: drop-down is set to TIFF.
  33. Set the Compression: drop-down to Deflate. This will result in smaller files (without compromising image quality) in nearly every case.
  34. Note: if either of the above drop-downs are greyed out, make sure at least one of the checkboxes in this section is checked. You will uncheck it in the next step, but at least one must be enabled to change these options.
  35. Uncheck all boxes in the Panorama Outputs: section.
  36. In the Remapped Images: section, check the No exposure correction, low dynamic range box, and uncheck any others that are checked.
  37. Open one of the Panorama Preview windows by using one of these methods:
    • Click one of the toolbar buttons which has an icon of a blue sky with a green landscape silhouette at the bottom. The one marked GL should give better performance on a system with decent 3D graphics hardware.
    • From the View menu, select Fast Preview window or Preview window.
    • Press CTRL-P on your keyboard.
  38. Verify that the result looks more or less as expected - you should see the first exposure in the list on top, possibly with one or more of the others visible behind it depending on how well they were aligned. If the appearance is radically different, you will need to go back and verify the previous steps. You can use the numbered buttons to turn on and off the various images to make sure that everything more or less lines up between them.
  39. Close the preview window.
  40. Save the panorama (I call the file "Panorama.pto" and put it in the same directory as the TIFFs).
  41. Select the Stitcher tab. Click the Stitch now! button.
  42. Choose a name prefix for the output file. I use the same as the name of the panorama file ("Panorama").
  43. If you followed all of these steps, you should now have three new TIFFs in the output directory.
  44. Close Hugin.
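
For the curious, here is a rough sketch (in Python, using only the standard subprocess and math modules) of the sort of command-line pipeline Hugin drives behind its GUI, using the same Autopano-SIFT-C arguments as the preferences configured above. autopano-sift-c, autooptimiser, and nona are the real underlying tools, but the file names here are hypothetical, and a project file produced this way still lacks the per-image lens assignments and optimiser settings from steps 14-20, so treat it as an illustration of what the GUI is doing rather than a replacement for the steps above:

    import math
    import subprocess

    def hfov_degrees(focal_mm, crop_factor):
        # Horizontal field of view for a 36mm-wide full-frame-equivalent frame.
        return 2 * math.degrees(math.atan((36.0 / crop_factor) / (2.0 * focal_mm)))

    images = ["visible.tif", "nir.tif", "uva.tif"]
    hfov = hfov_degrees(28, 1.5)  # about 46 degrees, matching the example above

    # Control point generation (step 11); 0 = rectilinear projection.
    subprocess.check_call(["autopano-sift-c", "--maxmatches", "96",
                           "--projection", "0,%.1f" % hfov,
                           "cpoints.pto"] + images)

    # Optimise the parameters recorded in the project file (step 21).
    subprocess.check_call(["autooptimiser", "-n", "-o", "optimised.pto",
                           "cpoints.pto"])

    # Remap each exposure to its own TIFF without blending (steps 36-43).
    subprocess.check_call(["nona", "-m", "TIFF_m", "-o", "Panorama",
                           "optimised.pto"])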

Now that you have a set of aligned versions of the exposures, you can proceed to the next article, Postprocessing, although there is additional information below that may be of use.

Temporary Modifications That Make Matches Easier (In Some Cases)

As previously mentioned, Hugin was not designed to align multispectral images. Its "similarity-detection" algorithms are heavily geared towards images of human-visible light, or at least images that are all from the same spectral band. Frequently Hugin and/or Autopano-SIFT will fail to detect similarities that are obvious to us, such as areas on the boundary between earth and sky in a standard image (in which the ground is usually darker than the sky) and a near infrared version of the same photo (in which the sky is usually darker than the ground).

If automatic control point detection fails (and/or worse, manual control point creation results in errors due to low similarity), you can sometimes "help out" the software by temporarily modifying the near infrared version. I have only once ever had to perform this type of step for an ultraviolet-A photo (the "DIY punk spectrum" from A Detailed Introduction), but it does occasionally come in handy there.

The easiest approach is to invert one (but not both) of the images. This will make e.g. the sky/ground boundary mentioned above match between them.

You can also perform other processing, such as edge detection or modifying the brightness/contrast. Remember to keep an unmodified version of the image to revert to once the control points are created!
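
A minimal sketch of the temporary inversion (tifffile, hypothetical file names), assuming a 16-bit image so that full scale is 65535:

    import tifffile

    img = tifffile.imread("nir_for_hugin.tif")
    inverted = 65535 - img  # flip bright and dark
    tifffile.imwrite("nir_inverted_temp.tif", inverted.astype("uint16"))

Generate control points against the inverted copy, then, as stressed below, revert to the unmodified file before stitching.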

Inverting One Exposure To "Help Hugin Out"
[ RGB vs NIR (Normal) ]
[ RGB versus NIR (Inverted) ]

In the first image, you can see that most of the high contrast keypoints between the two exposures (from my photos of Yellowstone National Park) have their brightness reversed. That is, in the RGB exposure the features may be dark against a bright background, but in the NIR exposure they are bright against a dark background, preventing Hugin from recognizing them. The second image illustrates the technique of inverting one of the exposures to allow Hugin to detect the similarity, which is how these keypoints were generated. Remember to revert to the original version before stitching and compositing!

 

Multispectral Panoramas

It is only a small logical step to use a modified version of this process to create a multispectral panorama, but the amount of work required is often far greater, and in my experience the results are only rarely worth it (although there are some notable exceptions). Assuming you captured the requisite exposures, the above processes can be modified in this way:

  1. When importing the exposures into Hugin, import all of the panorama components.
  2. When using Autopano-SIFT to generate control points, I've obtained the best results by first generating matches between all of the human-visible exposures, then generating matches between each of those exposures and the other exposures which correspond to the same shot. That is, I do not generate matches between the various near infrared shots, only between each of them individually and the corresponding RGB image. I use the human-visible exposures as the "chain" or "spine" that connects everything together. For particularly problematic panoramas, I will create control points that tie the RGB exposures together with the adjacent near infrared and/or ultraviolet-A shots, but I will still not attempt to link any near infrared to any other near infrared exposure directly.
  3. When using the New Lens button, select all of e.g. the near infrared exposures at once, so that they all end up with the same lens number. The same is true for any other spectral band (other than human-visible, which as the reference point remains unchanged). That is, for the sort of three-exposure work I do, all of the RGB shots will have a lens number of 0, all of the NIR versions will have a lens number of 1, and all of the UVA images will have a lens number of 2.
  4. When optimizing the control points of a multispectral panorama, you must lower your expectations. I rarely obtain values lower than 3-5, and virtually never see many that are below 1. This does mean that the alignment will not be as good, but it is essentially unavoidable for multispectral panoramas.
  5. A rectilinear projection will not work well for anything beyond a 2-shot-wide panorama, so you'll need to pick one of the other options (I typically select Equirectangular).
  6. Keep in mind that the resulting composite will almost certainly exhibit registration errors between spectral bands. This is because the stitching software (enblend, etc.) attempts to perform its work along boundaries it determines are the least-obtrusive. Due to the differences between spectral bands, these locations will often be different, and so the result will be that some elements of the panorama are offset between the versions.
"Cross-Stitching" Multispectral Exposures
[ Cross-Stitching Example 1 ]
[ Cross-Stitching Example 2 ]

As noted above, I've obtained the best results with multispectral panoramas by avoiding keypoints that link adjacent exposures of the same type (other than human-visible light, of course). These two images (from Vasquez Rocks - see Drive 2007 - Day 06) illustrate my "cross-stitching" technique. In the first, the second NIR exposure is associated with the first RGB exposure. In the second, the first UVA exposure is associated with the second RGB exposure. This is in addition to keypoints that associate the first NIR and UVA exposures with the first RGB exposure, the second NIR and UVA exposures with the second RGB exposure, the first RGB exposure with the second RGB exposure, etc.

 
 
Footnotes
1. Not to be confused with the far-more-common 16-bit-per-pixel image format. See the image-editing software section of Other Tools.
2. Most DSLRs, for example, support at least 12-bit-per-channel RAW files. Newer/higher-end models support 14-bit-per-channel, or even 16-bit-per-channel. 8-bit-per-channel images provide 256 distinct possible brightness values per channel. For RGB images, this gives a total space of 256^3 or about 16.8 million colours, which sounds like a lot until you do the same math for 12- and 16-bit-per-channel images and discover that the former provides about 68.7 billion and the latter provides over 281 trillion distinct colours. However, a more important concern is with regards to greyscale, because that's how the near infrared and ultraviolet exposures are interpreted. 256 shades of grey is very poor fidelity compared to the 4096 offered by 12-bit-per-channel or the 65536 provided by 16-bit-per-channel formats.
3. Like all of the information in these articles, the Autopano-SIFT-C arguments listed are the ones that have worked best for me, not some mathematically-ideal super-values.
4. As with virtually all rules, there are exceptions to this one. If you were unable to obtain any matches (even manually) between the human-visible exposure and one of the others, you can attempt a match between two of the other exposures. This usually fails as well (meaning you will either have to abandon the inclusion of the problematic image, or fully manually specify control points with no automatic fine-tuning).
5. If you have an interest in experimenting with the other optimization parameters (for example, to correct various types of lens distortion), don't let me stop you. I've just found that I prefer the results when minimal interference is involved. Hugin is also a potentially very complicated piece of software. If you are interested in exploring the full range of its capabilities (HDR, etc.), I suggest reading the Hugin project's documentation.
 