I’ll never forget the first time I set my skull on a stereoscope. Hunched over a lab bench and fumbling with a stereo pair of false color aerial photographs from a file cabinet in the Cloquet Forestry Center, I finally got the thing focused. Before long I was lost in the endless fuzz of red and jack pine stands in Carlton County, MN. You can learn so much about a forest from these images, and I was amazed by how we could reconstruct a three-dimensional view of the forest using a relatively simple tool. Ten years later I am a forester-turned-software engineer at NCX and I still find myself learning from the trickery of the stereoscope. Our eyes are highly sensitive instruments, and we can use them to interpret a lot more from a two-dimensional map than you might expect.
Examine the top left panel in Figure 1. What can you learn about the forest from that image? Even though the image is monochromatic, we can see trends in vegetation density. Bright spots have no trees, dark spots have many trees, and there are many shades in between. If you look carefully, you can read the texture of the forested areas to learn about the relative size of the trees on the ground.
The top right panel of Figure 1 shows a false color composite image of the same area. This image gives our eyes much more to chew on! The shades of red that are visible in the false color image allow us to distinguish between types of vegetation (conifer vs deciduous). Humans are “trichromats,” which means our retinas have receptors for three different colors (red, green, blue), allowing us to perceive the world in color. With the false color image we are tricking our eyes into seeing more than we should! We do this by displaying the near-infrared band from the NAIP imagery in the red channel, thereby making near-infrared reflectance visible to the human eye!
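If you want to build a composite like this yourself, the band swap is just an array-stacking exercise. Here is a minimal sketch using numpy, assuming you have already read the near-infrared, red, and green bands into arrays (with a library like rasterio, for example); the toy 2x2 arrays below are stand-ins for real imagery:

```python
import numpy as np

def false_color_composite(nir, red, green):
    """Build a false color composite: near-infrared is shown in the
    red channel, red in the green channel, and green in the blue channel."""
    # Stack the bands into an (H, W, 3) image.
    composite = np.dstack([nir, red, green]).astype(np.float32)
    # Stretch each band to the 0-1 range for display.
    for i in range(3):
        band = composite[..., i]
        composite[..., i] = (band - band.min()) / (band.max() - band.min() + 1e-9)
    return composite

# Toy 2x2 bands standing in for NAIP imagery (hypothetical values).
nir = np.array([[200.0, 50.0], [180.0, 60.0]])
red = np.array([[40.0, 120.0], [50.0, 110.0]])
green = np.array([[60.0, 100.0], [70.0, 90.0]])
img = false_color_composite(nir, red, green)
```

The vegetated pixels (high near-infrared reflectance) come out bright red in the result, which is exactly the effect we are exploiting when we read those shades of red in Figure 1.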
Can you tell from the false color image that we are in a mountainous region of Northern California? The imagery alone does not do a good job of characterizing the topography, so we must trick our eyes again. The bottom left image is a hillshade: a simulated sun casts shadows across a landscape built from a digital elevation model. By combining the hillshade with the false color image (hint: try the “Multiply” blending mode in the color rendering options for your hillshade in QGIS), we add shadows that make the topography legible to our brains, which are constantly interpreting shadows as we work out the position of objects in our view.
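Under the hood, a hillshade is a simple bit of trigonometry on the elevation model, and the “Multiply” blend is just per-pixel multiplication. Here is a rough sketch in numpy, assuming a DEM already loaded as a 2D array; the sun position defaults and the aspect convention follow the common hillshade formulation, but treat the details as an approximation rather than exactly what QGIS does:

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Shade a DEM with a simulated sun at the given azimuth/altitude."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Elevation gradients give slope and aspect at each cell.
    dy, dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    # Illumination: bright where the surface faces the sun.
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

def multiply_blend(rgb, shade):
    """'Multiply' blending: darken each band by the hillshade value."""
    return rgb * shade[..., None]

# A tiny synthetic ridge as a stand-in for a real DEM.
dem = np.abs(np.arange(-5, 6))[None, :] * np.ones((11, 1)) * -10.0
shade = hillshade(dem)
blended = multiply_blend(np.ones((11, 11, 3)), shade)
```

Flat ground comes out a uniform gray (just the sun’s altitude term), while slopes facing toward and away from the sun brighten and darken — the same shadow cues our brains read on the printed map.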
Beyond aerial imagery
Aerial imagery is an incredible tool, and the example above demonstrates how we can manipulate it to make it more useful for operational planning, but there is more that we can do. Reading multi-band images and interpreting shadows for depth are innate skills that we can use in creative ways.
When we examine aerial imagery our brain is interpreting patterns in the brightness, color, texture, and shadows of the image. We analyze the patterns and make determinations about the features in the image: bare ground, dense conifer cover, hardwoods along riparian zones. This is all useful, but exploratory data analysis techniques like principal component analysis can be used to create images that show patterns more clearly than in the original images. This process analyzes the variation of red/green/blue/near-infrared pixel values across the area of interest and generates a new multi-band image that highlights the distinct features in the image.
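The principal component transform itself fits in a few lines. Here is a hedged sketch in plain numpy (an eigendecomposition of the band covariance matrix; libraries like scikit-learn offer the same thing ready-made), applied to a hypothetical 4-band red/green/blue/near-infrared image:

```python
import numpy as np

def pca_composite(bands, n_components=3):
    """Project a multi-band image onto its top principal components.

    bands: array of shape (H, W, B), e.g. B=4 for R/G/B/NIR.
    Returns an (H, W, n_components) image of component scores.
    """
    h, w, b = bands.shape
    x = bands.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                      # center each band
    cov = np.cov(x, rowvar=False)            # B x B band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort descending
    components = eigvecs[:, order[:n_components]]
    return (x @ components).reshape(h, w, n_components)

# Random pixels as a stand-in for a clipped NAIP scene.
rng = np.random.default_rng(0)
bands = rng.normal(size=(8, 8, 4))
scores = pca_composite(bands)
```

Stretching the three component score bands to 0–255 and displaying them as red, green, and blue gives you a composite like the one in Figure 2: the first component soaks up the most variation, so the features that differ most get the loudest colors.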
In the principal component composite image from Figure 2 (left panel) the blue/purple shade describes shrubby/deciduous cover, red areas are mostly bare ground, and green areas are generally conifer-dominated. Compare that splash of color to the false color composite and notice the increase in contrast! The subtle trends in the red shades of the false color image have been replaced by manic variation in color in the principal component image. Of course you would not want to walk out into the woods with only a principal component composite map, but can you see how it might be helpful for cover type mapping? This is a great example of a way we can let machines do some of the hard work for us!
Figure 3 shows the three principal components separately, rendered as red, green, and blue images respectively. When combined into a 3-band composite image (bottom right panel of Figure 3), distinct features pop out of the background.
Cover type maps
You can use the same concept as described above to create interesting and useful maps out of raster images of forest cover, too. At NCX we generate rasterized estimates of species-level forest stocking from our Basemap product. By combining species into a set of three groups (e.g. softwood, oak/hickory species, all other hardwoods) we can produce 3-band heatmaps that show stocking levels among the three groups in full color. It is tricky to turn pixel-level estimates of stocking by species group into a reliable forest type assignment like NLCD (right panel of Figure 4), but the continuous nature of the 3-band image is a great visualization tool. Areas that are pink/purple/red in the image are dominated by conifer species (mostly red spruce in this spot), riparian zones and higher elevation hardwoods are dominated by the “other hardwoods” group (sugar maple, yellow birch, etc), and the green midslope regions of the mountains are dominated by oak species.
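Mechanically, building one of these heatmaps is just summing species rasters into three group bands and scaling for display. A minimal sketch, assuming per-species stocking arrays are already in hand (the species names and toy values below are hypothetical, not actual Basemap output):

```python
import numpy as np

def species_group_heatmap(stocking, groups):
    """Collapse per-species stocking rasters into a 3-band heatmap.

    stocking: dict mapping species name -> (H, W) stocking array.
    groups: three lists of species names, mapped to the R, G, B channels.
    """
    # Sum the species within each group to get one band per group.
    bands = [sum(stocking[sp] for sp in group) for group in groups]
    img = np.dstack(bands).astype(np.float32)
    # Scale the whole image by its maximum so relative stocking drives the color.
    return img / (img.max() + 1e-9)

# Toy 2x2 stocking rasters (hypothetical values).
stocking = {
    "red_spruce": np.array([[8.0, 1.0], [6.0, 2.0]]),
    "red_oak": np.array([[1.0, 5.0], [2.0, 4.0]]),
    "sugar_maple": np.array([[0.5, 3.0], [1.0, 6.0]]),
}
groups = [["red_spruce"], ["red_oak"], ["sugar_maple"]]
heat = species_group_heatmap(stocking, groups)
```

Pixels where the first group dominates come out red, the second green, the third blue, and mixtures blend accordingly — the same continuous color story as the map in Figure 4.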
I hope the ideas in this column get you thinking about ways to create maps that go beyond the traditional display of color or false-color aerial imagery. If you have any questions or ideas of your own I would love to hear from you – send me an email at firstname.lastname@example.org.