Digital Image Processing of Remote-Sensed Data
Structure of Digital Images
The structure of a digital image refers to how the image is represented and stored using pixels and numerical values. A digital image is essentially a matrix (grid) of numbers that correspond to brightness and/or color information.
Photographs Versus Digital Images
The main difference between a photograph and a digital image is that a photograph is stored in analog format and is usually printed on paper before being interpreted. An analog format records the data continuously: there are no sharp boundaries between one part of the photograph and another. The black-and-white photo in Figure 15.1 was taken in the visible part of the spectrum. Photos are typically recorded over the wavelength range from 380 to 900 nanometers (nm), covering the visible and reflected-infrared regions. Analog images, such as photographs and maps, can be converted into digital format by a process known as digitization: the image is subdivided into small, equal-sized and equally shaped areas, called picture elements or pixels, and the brightness of each area is represented by a numeric value, or digital number (DN). Indeed, that is precisely what has been done to the photo in Figure 15.1, bottom left.
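The digitization step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular scanner's algorithm: the brightness field below is a made-up synthetic function standing in for the continuous analog image, and `digitize_scene` is a hypothetical helper name.

```python
import numpy as np

def digitize_scene(rows, cols, bit_depth=8):
    """Sample a synthetic 'analog' brightness field on a pixel grid
    and quantize each sample to a digital number (DN)."""
    y, x = np.mgrid[0:rows, 0:cols]
    # Continuous brightness in [0, 1]: a smooth ripple, purely illustrative.
    brightness = 0.5 + 0.5 * np.sin(0.3 * x) * np.cos(0.3 * y)
    levels = 2 ** bit_depth                          # 256 levels for 8 bits
    # Each pixel's brightness becomes an integer DN in [0, levels - 1].
    return np.round(brightness * (levels - 1)).astype(np.uint8)

image = digitize_scene(4, 4)
print(image.shape)   # (4, 4) -- a matrix of pixels
print(image.dtype)   # uint8 -- each DN fits in 8 bits
```

The result is exactly the structure described earlier: a matrix of numbers, one DN per pixel.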
Digital Image Display
Digital image display in remote sensing is the process of visually presenting data collected by remote sensing systems, whether airborne or ground-based, on a computer screen. The information is represented by individual pixels, each holding a numerical value for the intensity of light reflected from a specific area on the Earth's surface. Displaying the data this way allows geographic features to be analyzed and interpreted visually on a digital platform.
Monochromatic Display
Any image, whether a panchromatic image or a single spectral band of a multispectral image, can be displayed as a black and white (B/W) image on a monochromatic display. The display converts digital numbers (DNs) to electronic signals at a series of energy levels that generate different grey tones (brightness levels) from black to white, thus forming a B/W image display.
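One common way to map DNs onto grey tones is a linear contrast stretch, in which the lowest DN in the band renders as black and the highest as white. The sketch below assumes an 8-bit (0–255) display range; `grey_stretch` is an illustrative name, not a library function.

```python
import numpy as np

def grey_stretch(dn):
    """Linearly stretch a band of DNs to the 0-255 display range."""
    dn = dn.astype(float)
    lo, hi = dn.min(), dn.max()
    if hi == lo:                      # flat band: render mid-grey everywhere
        return np.full(dn.shape, 128, dtype=np.uint8)
    return np.round((dn - lo) / (hi - lo) * 255).astype(np.uint8)

band = np.array([[10, 20],
                 [30, 40]])
print(grey_stretch(band))
# [[  0  85]
#  [170 255]]  -- lowest DN (10) -> black, highest (40) -> white
```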
True Color Composites
A natural or true color composite is an image displaying a combination of visible red, green, and blue bands in the corresponding red, green, and blue channels on the computer (Figure 15.4).
False Color Composites
A false color composite is a multispectral image displayed using bands other than visible red, green, and blue as the red, green, and blue components of the image display (Figure 15.4).
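Both kinds of composite amount to choosing which three bands feed the display's red, green, and blue channels. The sketch below assumes a multispectral cube stored as a (bands, rows, cols) array ordered blue, green, red, near-infrared; that band ordering is an illustrative assumption, not a standard, and the data here is random.

```python
import numpy as np

# Assumed band order for this example only.
BLUE, GREEN, RED, NIR = 0, 1, 2, 3

def composite(cube, r, g, b):
    """Stack three bands into a (rows, cols, 3) RGB display array."""
    return np.dstack([cube[r], cube[g], cube[b]])

cube = np.random.randint(0, 256, size=(4, 64, 64), dtype=np.uint8)

true_color = composite(cube, RED, GREEN, BLUE)    # natural colors
false_color = composite(cube, NIR, RED, GREEN)    # classic color-infrared look
print(true_color.shape)   # (64, 64, 3)
```

In the false color assignment shown, vegetation (which reflects strongly in the near-infrared) would appear red, which is the familiar look of color-infrared imagery.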
Bit Depth
Bit depth refers to the number of bits used to represent the color or intensity of a single pixel in an image or a single sample in digital audio. It determines the amount of information that can be stored for each pixel or sample, directly affecting the resolution and quality of the data. Standard bit depths include 8-bit (which represents 256 levels per channel), 16-bit (which represents 65,536 levels per channel), and 24-bit color (8 bits on each of three channels, which represents about 16.7 million colors).
Digital Image Formats
The image data acquired from remote sensing systems are stored in several formats: (1) band sequential (BSQ), (2) band interleaved by line (BIL), and (3) band interleaved by pixel (BIP). It should be noted, however, that each of these formats is usually preceded on the digital tape by "header" and/or "trailer" information, which consists of ancillary data about the date, altitude of the sensor, attitude, sun angle, and so on.
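The three interleaving schemes store the same pixel values in different orders. A small sketch, using a toy cube of shape (bands, rows, cols), makes the difference concrete; real files would wrap these byte streams with the header and trailer records mentioned above.

```python
import numpy as np

# A 2-band, 2-row, 3-column cube holding the values 0..11.
cube = np.arange(2 * 2 * 3).reshape(2, 2, 3)

# BSQ: all of band 0, then all of band 1.
bsq = cube.reshape(-1)
# BIL: row 0 of band 0, row 0 of band 1, row 1 of band 0, ...
bil = cube.transpose(1, 0, 2).reshape(-1)
# BIP: all band values for pixel (0,0), then pixel (0,1), ...
bip = cube.transpose(1, 2, 0).reshape(-1)

print(bsq.tolist())  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(bil.tolist())  # [0, 1, 2, 6, 7, 8, 3, 4, 5, 9, 10, 11]
print(bip.tolist())  # [0, 6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 11]
```

BSQ is convenient when processing one band at a time, while BIP keeps every band value for a pixel adjacent, which suits per-pixel (spectral) operations.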

