Culture, Climate Science & Education

SECTION FOUR
Characteristics of Imagery
Click the Bubbles Above or the Double Arrows to Explore this Section
A. Introduction
NASA’s fleet of earth observing satellites. Credit: NASA's Goddard Space Flight Center. 2017 April 19.
As you will see through this course, remote sensing allows us to gather an amazing range of information about the natural world. Hopefully, we will use this knowledge to make better decisions about how we use, manage, or preserve our limited natural resources.
Remote sensing is part of our toolkit, but what sensor matches what purpose? Just as you would not want to fish for northern pike with a trout fly, you would not get very far monitoring weekly spring green-up with annual aerial photography.
In this section you will learn the characteristics of imagery, particularly resolution (spatial, temporal, spectral, and radiometric) and positional accuracy. You will then be able to judge which aerial or satellite sensor best fits the question or need at hand.
B. Photographic vs Digital Images

Aerial Photograph over the East Fork Trinity River, TX. USDA APFO
Images used in remote sensing may be captured on traditional photographic film or, more commonly now, by an electronic sensor. Incremental advances in digital remote-sensing cameras have resulted in the gradual replacement of film with digital imagery.
Photographs are created when light entering a camera activates light-sensitive chemicals embedded in the film’s emulsion layer. The size of these light-sensitive silver halide crystals determines the image’s resolution (graininess). Film’s microscopic photosensitive grains give it high resolution, an advantage over similarly priced digital cameras (as of 2018).
Large format, specially calibrated “metric” cameras, mounted on aircraft, give us the ability to take measurements from high-resolution photographs.

Donald E. Simanek. 2008. Stereoscopy and Illusions. Accessed 2018 Dec 20. http://www.lockhaven.edu/~dsimanek/3d/stereo/3dgallery.htm

Dolores River gorge, Colorado. Stereoscopy significantly enhances our ability to interpret aerial images—and can be used to create elevation maps. Without a stereoscope, the technique for viewing depth is similar to that of Magic Eye®3D posters, where for example, a sailboat seems to emerge from the poster. Source: Bob Stahl. Aerial Photos & Stereo Photography. Accessed 2018 Dec 20. http://www.geocities.ws/potatotrap/tech/aerialx.htm
Analysis with photographic imagery
In regard to the quantitative measurement of spectral signatures (a key feature of modern remote sensing), film-based imagery is more limited than digital imagery. However, much information was (and still can be) collected from photographs through the analysis techniques of photogrammetry and photointerpretation.
Photogrammetry refers to measurements such as location, distance, & size of image features. Stereoscopic techniques create 3D visualizations and are used to derive terrain & elevation datasets. For fun, click on the button at the bottom of this page to view a gallery of vintage stereo photo pairs. It demonstrates how stereo vision increases detail, depth, and context for interpreting aerial photographs. But before you do, you might click here to see hints on how to view 3D stereo pairs.
Photointerpretation is figuring out “what is going on” in a photograph—identifying features in a photo and determining their significance. This is still very much a part of remote sensing; skilled image interpreters work like detectives, using clues such as object location, size, shape, shadow, tone/color, texture, pattern, height/depth, and site/situation/association.
C. Digital Images

Source: Potash & Phosphate Institute (PPI). www.ppi-far.org/ssmg
Digital imagery was developed to overcome limitations of film photography, such as the inability to precisely calibrate spectral reflectance or to transmit images from space. The first Corona spy satellites (1959-1972) parachuted film canisters back from space; film was not developed until days or weeks after launch, once the canisters had been recovered. NASA’s Explorer 6 transmitted the first crude images from Earth orbit in 1959, via “slow scan television” technology. In 1976, the KH-11 U.S. reconnaissance satellites became the first to carry CCD digital sensors, revolutionizing the ability to deliver real-time imagery from space (Vick 2007).
For decades, aircraft-based imagery used film, due to its higher resolution and lower cost. By 2009—with falling costs of digital sensors and increased demand for “4-band” imagery including visible & near infrared wavelengths—most NAIP aerial image surveys used digital sensors.
Drones, or UAVs (unmanned aerial vehicles), are an emerging source of high-resolution digital imagery.
Digital imagery, both aerial- and space-based, will be the focus of this course. Benefits of digital imagery include:
- Precise calibration of spectral reflectance values, compared to chemical & mechanical inconsistencies in film
- More spectral bands in a wider range of wavelengths
- Image capture at lower light levels
- Greater dynamic range (record a wider range of light levels)
- Digital image processing with computers
- Ease of electronic sharing
- Stability of storage, compared to film, which degrades over time
Digital Sensor Design

For more information about sensor design, visit the above “Cambridge In Colour” URL (brief overview), or see the following article for a detailed overview of current sensors: https://www.intechopen.com/books/multi-purposeful-application-of-geospatial-data/a-review-remote-sensing-sensors
Source: Cambridge In Colour. Copyright © 2005-2018. Digital Camera Sensors. Accessed 2018 Dec 22. https://www.cambridgeincolour.com/tutorials/camera-sensors.htm
Digital sensors capture images with a grid array of tiny light-sensitive detectors, or “photosites”. Each photosite corresponds to one image pixel; pixel count is one way to describe camera format size, or image detail. A high-end “large format” sensor array with 17,310 x 11,310 photosite rows and columns contains 195,776,100 pixels—referred to as a 196-megapixel camera. In addition to a “framed” rectangular array of photosites, other design options include “pushbroom” and “whiskbroom” scanners.
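The megapixel figure is just rows times columns of photosites. A quick sketch of the arithmetic, using the figures from the example above:

```python
# Pixel count for the "large format" sensor array described above.
rows, cols = 17_310, 11_310        # photosite rows and columns
pixels = rows * cols               # one photosite = one pixel
print(pixels)                      # 195776100
megapixels = round(pixels / 1_000_000)
print(f"{megapixels}-megapixel camera")    # 196-megapixel camera
```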
To measure reflectance separately for each band, some sensors use color filters over a single sensor array, while other designs use a prism to split incoming light, sending each wavelength range to its own photosite detector array.
D. Color and False-Color Imagery
Computer displays and television screens create color by blending the primary colors—Red, Green, and Blue; this “RGB” system can produce any color. Everyday digital cameras (e.g. camera phones) produce one file per picture—the colors have already been mixed.

Source: Humboldt State University. http://gsp.humboldt.edu/olm_2015/courses/gsp_216_online/lesson3-1/bands.html
Remote Sensing Multispectral Scanners (MSS) record each band (wavelength range) separately. The result is a separate grayscale image “layer” file for each band. Bands are saved separately so that spectral analysis & image processing are possible. This system also allows us to display “invisible” bands like infrared, because we can display any band as any RGB color.
Normal Color Images
MSS images can be displayed in “normal color” by displaying the red band as red light on a screen, green band as green, and blue band as blue.
Color Mixing
Visit this website and experiment with blending different intensities of red, green, and blue light to create other colors. In this “additive color” system, combining all visible wavelengths creates white light.
Optional Activity: Additive Colors and Color Perception
Go here to explore additive colors and color perception.
View Landsat 8 Band Reflectance in Grayscale & Color
Visit this website to explore Landsat 8 band reflectance by following these steps:
- From the above link, choose the RemotePixel viewer. On the viewer’s map page, zoom to your state, or area of your choice.
- Click on the map to select the image “tile” of your area, then choose the most cloud-free image thumbnail on bottom list. Try different zoom levels after your image loads. The question mark icon will display steps and a help menu.
- Choose the middle tab (screenshot) and view each band—reflectance for each band is shown in grayscale. Band 5 is Near Infrared; write down two surfaces that reflect the most and least NIR radiation.
- Band 2 is Blue light: what features reflect the most and least blue? What does water look like, compared to Band 5?
- Landsat 8 bands 10 & 11 are Thermal Infrared. What is different about these bands?
- Next, view normal and false color band combinations. On the top right of the RemotePixel viewer, click the 4-dash tab—the leftmost of the three tabs (screenshot).
- Experiment with different band combinations. Note the band numbers and use the graphic to see what wavelength region each band is from.
- Interpret your image, zoom into unknown features and identify what they are. Write down one observation for each band combination.

In this CIR image, healthy vegetation with access to water (riparian, irrigated) contrasts with dry grasslands. Semi-arid Florence MT, September 28, 2012. Source: USGS NHAP
False Color Images
Displaying colors other than “what they are” creates false color images. To see invisible wavelengths, for example, you could display invisible Near Infrared (NIR) light as Red on your computer screen. That means actual red light reflectance is shuffled and displayed as Green on your screen. Green light reflectance is displayed as Blue. This occupies all the RGB display slots and actual blue light (blue band) is not displayed.
This particular “band combination” is called Color Infrared (CIR). It is commonly used because healthy vegetation stands out from other areas, and water contrasts well with land. Different band combinations highlight different features or phenomena. Later in this course, you will learn band combinations that highlight snow/ice cover and wildland fire burn intensity.
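A minimal pure-Python sketch of the band shuffling described above: the same per-band grayscale layers can be assembled into a normal color or a CIR composite just by changing which band fills each RGB display slot. The 2 x 2 pixel values below are invented for illustration.

```python
# Toy 2x2 grayscale band layers (0-255); the values are invented.
blue  = [[ 30,  40], [ 35,  45]]
green = [[ 60,  70], [ 65,  75]]
red   = [[ 50,  55], [ 52,  58]]
nir   = [[200, 210], [205, 215]]   # near infrared: high for healthy vegetation

def composite(r_band, g_band, b_band):
    """Stack three band layers into per-pixel (R, G, B) display tuples."""
    return [[(r_band[i][j], g_band[i][j], b_band[i][j])
             for j in range(len(r_band[0]))]
            for i in range(len(r_band))]

normal_color = composite(red, green, blue)   # each band shown "as it is"
cir          = composite(nir, red, green)    # NIR->Red, Red->Green, Green->Blue
print(cir[0][0])    # (200, 50, 60): strong NIR displays as bright red
```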
Band Combinations
Visit this website and follow these steps:
- View several band combinations of Landsat, MODIS, and ASTER satellite images. Begin by creating a Landsat color infrared—click band “4” on the Red bar, “3” on Green, and “2” on Blue. What color is healthy vegetation? What color(s) is water?
- Create two band combinations for both MODIS & ASTER. Use arrows to right of text box to scroll down for ideas. Name one thing each of your 4 band combinations highlights.
E. Pixels and Pixel Values

Source: Canada Centre for Mapping and Earth Observation, Natural Resources Canada. 2016-08-17. Accessed 2018 Dec 21. https://www.nrcan.gc.ca/earth-sciences/geomatics/satellite-imagery-air-photos/satellite-imagery-products/educational-resources/14641
Remote sensing images are a grid of pixels, just like pictures from a camera phone. Each pixel contains one value: surface reflectance, averaged over the ground area the pixel covers. Remote sensing cameras are calibrated to such a degree that each pixel’s reflectance value is scientific data, not merely a color value used to create a backdrop image.
A digital image composed of pixels (aka cells) is called a raster. All aerial and satellite images are raster datasets. Some GIS datasets display features with points, lines, or polygons—these are called vector datasets.
Radiometric Calibration
In the last activity, pixel values ranged from 0 to 255. An 8-bit image can distinguish 256 levels of brightness, or shades of gray. These non-calibrated pixel values are referred to as raw “Digital Numbers” (DN)—they have not been corrected for the effects of atmospheric absorption, sun angle, sun intensity, etc. Unprocessed DNs can be used for displaying scenes and for some image analysis (“band ratio” techniques cancel out most atmospheric and solar effects).

Source: http://gsp.humboldt.edu/olm_2015/courses/gsp_216_online/lesson4-1/radiometric.html. Accessed 2018 Dec 26
Other remote sensing analyses require radiometric calibration—corrections for atmospheric and solar influences on electromagnetic energy before it reaches the sensor. Raw digital numbers can be corrected to radiance—the radiative energy from each pixel (in watts per steradian per m² per μm). Radiance can then be converted to reflectance—the percent of light striking an object that is reflected. Surface reflectance is the preprocessed, corrected pixel value used to identify features with spectral reflectance analysis.
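As a sketch of the first calibration step, raw DNs are commonly rescaled to top-of-atmosphere reflectance with a linear gain/offset plus a sun-angle correction. The coefficients below are typical of Landsat 8 scene metadata, but treat them as illustrative assumptions; real values come from each scene’s metadata file.

```python
import math

# Illustrative rescaling coefficients (scene-specific in practice).
REFLECTANCE_MULT = 2.0e-5     # multiplicative gain
REFLECTANCE_ADD  = -0.1       # additive offset
SUN_ELEVATION    = 45.0       # degrees above horizon, scene-specific

def dn_to_toa_reflectance(dn):
    """Linear DN rescaling, then correction for sun elevation angle."""
    rho = REFLECTANCE_MULT * dn + REFLECTANCE_ADD
    return rho / math.sin(math.radians(SUN_ELEVATION))

print(round(dn_to_toa_reflectance(10_000), 4))   # 0.1414
```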
Activity Four
(Click to go to Activity Four)
Pixel Reflectance Value
Visit this website and follow these steps:
- Move a medium-resolution pixel (30-m Landsat) over high-resolution images.
- Complete this activity to see how sensors record the average surface reflectance within each pixel.
- What is the highest and lowest pixel value you can find for each of the three images?
Extra Learning
(Click to Learn More)
F. Spatial Resolution

30 vs 2.5 meter Deadhorse Airport North Slope, Alaska.
Source: Alaska Geospatial Council. http://agc.dnr.alaska.gov/imagery.html
The next several pages review aspects of resolution—the level of detail we can gather from an image. Images differ in resolution spatially, temporally, spectrally, and radiometrically.
The spatial resolution of an image is determined by pixel size—you can’t zoom into a pixel and view additional detail.
Coarse resolution satellites (e.g. MODIS, with 1 km to 500 m pixels) lack spatial detail but capture a wide image swath every orbit, re-imaging every location on Earth every 1-2 days. Moderate resolution satellites (e.g. 30 m Landsat, Sentinel-2) have given us continuous earth coverage since 1972; a strength is tracking landscape-scale change over time. Free, public, high resolution imagery is available from aerial surveys: USDA NAIP imagery, generally 1 meter pixel and 4-band, is re-flown every ~3 years for the continental US.
Commercial satellites offer < 0.5 meter imagery for a price. Drones are increasingly used to gather sub-meter custom aerial surveys for purposes such as archaeology, weed management, & natural resource management.
In a 30 meter image, what do you think is the smallest feature you could identify?
The original pixel size at which a digital image is collected is called ground sample distance (GSD). Landsat 8 features 30 x 30 m pixel imagery; the GSD is 30 meters.
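Ground footprint follows directly from the pixel grid times the GSD. A quick sketch (the grid dimensions below are illustrative, not an actual Landsat product size):

```python
# Ground footprint = pixel grid x ground sample distance (GSD).
gsd_m = 30                   # Landsat 8 multispectral GSD, in meters
rows, cols = 6_000, 6_000    # illustrative scene dimensions

width_km  = cols * gsd_m / 1000
height_km = rows * gsd_m / 1000
print(width_km, height_km)   # 180.0 180.0 -> roughly a 180 km square scene
```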
- Visit this website and compare pixel size for three sensors.
- Click all three sensors (display all) and zoom into the smallest tile (IKONOS)
- What is the pixel size for this IKONOS image? What is the smallest feature(s) you can reliably identify?
- Zoom out so you can see some Landsat imagery around IKONOS. Turn IKONOS on/off. Notice how Landsat generalizes each pixel—a single cell may contain a mix of trees, bare ground, and road. Zoom out more—what is the smallest feature(s) you can reliably identify with Landsat? What is the pixel size?
- Zoom out again so you can see the entire Landsat image with a little surrounding MODIS. What is the size of a MODIS pixel? Turn Landsat on/off and see how MODIS generalizes reflectance within a pixel.
- Zoom out once more, until you can identify features with MODIS—what are the smallest features you can identify?
G. Temporal Resolution
Temporal resolution is the frequency of image capture. Aerial surveys flown once every third summer could detect change over a decade; however, fluctuations within that period would be lost. Earth observing satellites re-image locations daily to bi-weekly—this lets us pin down when things happen, such as spring green-up.
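One way to compare temporal resolution is observations per year at each revisit interval. The intervals below are approximate (NAIP-style aerial surveys are re-flown roughly every 3 years):

```python
# Approximate revisit intervals, in days (illustrative figures).
revisit_days = {"MODIS": 1, "Landsat 8": 16, "aerial survey": 3 * 365}

for sensor, days in revisit_days.items():
    obs_per_year = round(365 / days, 1)
    print(sensor, obs_per_year)   # MODIS 365.0, Landsat 8 22.8, aerial survey 0.3
```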
Watch the video at right and then answer the following:
- What is the temporal resolution of Landsat 8?
- Write down three climate science related topics/questions that would benefit from the temporal resolution of Landsat, compared to an aerial survey.
H. Spectral Resolution

Spectral bands that selected instruments on Earth-orbiting satellites record. The bandwidths of past, present, and experimental instruments are shown against the spectral-reflectance curves of bare glacier ice, coarse-grained snow, and fine-grained snow. The numbers in the gray boxes are instrument band numbers. The Hyperion sensor records 220 bands between 0.4 μm and 2.5 μm; in the diagram the individual bands are not numbered. Instruments are abbreviated as: ASTER, Advanced Spaceborne Thermal Emission and Reflection Radiometer; ALI, Advanced Land Imager; ETM+, Enhanced Thematic Mapper Plus (Landsat 7); MISR, Multiangle Imaging SpectroRadiometer; MODIS, MODerate-resolution Imaging Spectroradiometer (36 bands, of which 19 are relevant to discrimination of snow and ice); MSS, MultiSpectral Scanner (Landsats 1–3); and TM, Thematic Mapper (Landsats 4 and 5)
Sensors with more, narrower bands can see finer details in spectral reflectance curves. Hyperspectral sensors, with hundreds of narrow bands, have high spectral resolution and can detect reflectance changes over small pieces of the electromagnetic spectrum.
Our ability to identify and classify features depends on spectral resolution. Multispectral sensors can separate needleleaf from broadleaf trees; hyperspectral scanners can often identify individual species.
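A toy sketch of what low spectral resolution costs: averaging a finely sampled reflectance curve over one broad band collapses its within-band variation to a single number. All wavelengths and reflectance values below are invented for illustration.

```python
# Narrow-band reflectance samples (wavelengths in micrometers; values invented).
wavelengths = [0.45, 0.50, 0.55, 0.60, 0.65, 0.70]
reflectance = [0.05, 0.08, 0.12, 0.09, 0.06, 0.30]

def band_average(lo, hi):
    """What a broad-band sensor reports for the interval [lo, hi]:
    one mean value, with the within-band detail lost."""
    vals = [r for w, r in zip(wavelengths, reflectance) if lo <= w <= hi]
    return round(sum(vals) / len(vals), 3)

print(band_average(0.45, 0.55))   # one broad "blue-green" band
print(band_average(0.60, 0.70))   # the jump at 0.70 um is diluted in the average
```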
- Visit this website and experiment overlaying different sensor bands on different landcovers.
- What sensor would reproduce the most accurate spectral reflectance curve (i.e. which has the highest spectral resolution)?
- What sensor has the lowest spectral resolution?
- Name two landcover types with very similar reflectance patterns.
- Name two types with very different reflectance.
Visit this website and follow instructions to view spectral reflectance of several minerals and landcover types, as viewed from different sensors’ bands.
I. Radiometric Resolution

An 8-bit image, redisplayed at decreasing radiometric resolution (i.e. gray levels). A 1-bit image displays only two shades, black and white. The original image was the 15 meter panchromatic band from Landsat 7. Credit: Ant Beck, 2012 Dec 7. https://commons.wikimedia.org/wiki/File:Decreasing_radiometric_resolution_from_L7_15m_panchromatic.svg#file
- Visit this website and read the short article highlighting sensor improvements from Landsat 1 to Landsat 9.
- The bottom “Dwell on this” paragraph describes the advantage of higher radiometric resolution—12-bit imagery from Landsat 8 & 9, compared to previous Landsats.
- Use the image slider to compare Landsat 8 & 7 imagery.
- Write down your observed differences.
Radiometric resolution is the number of brightness levels (i.e. shades of gray) recorded in an image. This is controlled by the number format, i.e. pixel depth, of the image file. Reflectance data in each pixel is stored as a numerical value. Most imagery is delivered in binary “8-bit” number format—numbers are stored as 8-digit strings of ones and zeros. An 8-bit image can display 2⁸ = 256 shades of gray (i.e. brightness levels); zero in binary format is 00000000 and 255 is 11111111. A 12-bit image (e.g. Landsat 8) can differentiate 2¹² = 4,096 gray levels.
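The gray-level count follows directly from pixel depth (2 raised to the bit count):

```python
# Shades of gray available at common pixel depths.
levels = {bits: 2 ** bits for bits in (1, 8, 12, 16)}
for bits, shades in levels.items():
    print(f"{bits}-bit: {shades} gray levels")
# 1-bit: 2, 8-bit: 256, 12-bit: 4096, 16-bit: 65536
```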
Human eyes can only differentiate between 40 and 50 shades of gray (Aronoff 2005)—why then would earth observing satellites typically use 8-, 10-, or 12-bit encoding, and radar systems use 16-bit? Higher bit encoding reduces signal saturation in very bright areas. Terrain may be shaded by topography or clouds—more gray levels allow us to see detail in these dimly lit areas as well.
As with any resolution, higher radiometric resolution demands more file storage space.
J. Image Geometric Correction
Most imagery you acquire has been preprocessed for optical distortions, so that the location of each pixel is known and the image “sits” on the correct location over a map. At times, such as when using scans of older aerial photographs, you need to perform these geometric corrections yourself.

Steps for preprocessing imagery, including georeferencing, radiometric calibration, and topographic correction. Credit: Young et al (2017).
Image Distortion
Remotely sensed images are frequently combined with map data in Geographic Information Systems (GIS). Maps have one constant scale throughout the scene and are drawn from a perfectly vertical “top down” perspective. Maps have also been projected—the process of representing the earth’s round surface on a flat map.
Image Rectification
A raw satellite or aerial image cannot simply be overlaid on a map—the perspective view of the camera causes displacement and distortion. Locations and objects do not “sit” in the correct place: features away from the image center are viewed partially from the side and appear to lean away. Objects near the edge of an image are also slightly farther from the camera than objects at the “nadir”, directly underneath. This, together with terrain relief, causes distortions in scale across an image.
The rectification process shifts pixel locations to correct image distortion. Rectification may include orthographic correction—using a digital elevation model (DEM) to remove distortion from terrain—the result is called an orthophoto or orthoimagery. Rectification also usually includes georeferencing, which positions the image correctly on the earth’s surface using ground control points.
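Georeferencing is often stored as an affine “geotransform” that maps pixel row/column to map coordinates. A minimal sketch, ignoring rotation terms and using invented coordinates (the convention resembles the one used by tools such as GDAL):

```python
# Affine geotransform: top-left map coordinates plus cell size.
x_origin, y_origin = 500_000.0, 4_600_000.0   # invented projected coordinates
pixel_w, pixel_h   = 30.0, -30.0              # negative height = north-up image

def pixel_to_map(row, col):
    """Map coordinates of the center of pixel (row, col)."""
    x = x_origin + (col + 0.5) * pixel_w
    y = y_origin + (row + 0.5) * pixel_h
    return x, y

print(pixel_to_map(0, 0))   # (500015.0, 4599985.0)
```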
L. References
Aronoff S. 2005. Chapter 4 Characteristics of Remotely Sensed Imagery. In: Aronoff S. Remote Sensing for GIS Managers. Redlands CA: ESRI Press. p. 69-109.
USGS Spectral Library
Kokaly RF, Clark RN, Swayze GA, Livo KE, Hoefen TM, Pearson NC, Wise RA, Benzel WM, Lowers HA, Driscoll RL, Klein AJ. 2017. USGS Spectral Library Version 7: U.S. Geological Survey Data Series 1035. 61 p. https://doi.org/10.3133/ds1035
Vick CP. 2007 April 24. KH-11 Kennan Reconnaissance Imaging Spacecraft. Accessed 2018 Dec 21. https://www.globalsecurity.org/space/systems/kh-11.htm
Young N, Anderson R, Chignell S, Vorster A, Lawrence R, Evangelista P. 2017. A survival guide to Landsat preprocessing. Ecology 98:920-932. https://doi.org/10.1002/ecy.1730