Image Processing and Enhancement
Remote sensing (RS), also called earth observation, refers to obtaining information about objects or areas at the Earth's surface without being in direct contact with the object or area. Humans accomplish comparable tasks with the aid of the eyes or the senses of smell or hearing, so remote sensing is day-to-day business for people. Remote sensing can be broadly defined as the collection and interpretation of information about an object, area, or event without being in physical contact with it. Remote-sensing data play a growing role in studies of natural and semi-natural environments, a role that stretches from visual interpretation to sophisticated extraction of information by advanced image analysis and statistical algorithms. In their raw form, as received from imaging sensors mounted on satellite platforms, remotely sensed data generally contain flaws or deficiencies with respect to a particular application. To extract basic information from remotely sensed data, the flaws or deficiencies must be removed or corrected. In this paper I will try to describe some important general means of image correction. It is difficult to decide what should be included under the heading of image correction, since the definition of what is, or is not, a deficiency in the data depends to a considerable extent on the use to which those data are to be put. So I will discuss topics such as image preprocessing, digital images, image enhancement and other subjects related to image correction and to better means of image interpretation. The other idea raised and discussed in this paper is the relationship between vegetation indices and vegetation degradation as assessed with remotely sensed data.

2. Function of image preprocessing and its importance for image analysis

The function of image preprocessing is to apply methods that correct image deficiencies and remove flaws before the images are used for other purposes. Mather and Koch (2011) stated that, in their raw form, as received from imaging sensors mounted on satellite platforms, remotely sensed data generally contain flaws or deficiencies with respect to a particular application. The correction of deficiencies and the removal of flaws present in the data are termed preprocessing because, quite logically, such operations are carried out before the data are used for a particular purpose. Similarly, Reddy (2008) discusses how the correction of deficiencies and removal of flaws present in the data through certain methods are termed pre-processing methods. The Canada Centre for Remote Sensing (CCRS) remote sensing tutorial also supports this idea, describing preprocessing functions as those operations that are normally required prior to the main data analysis and extraction of information. From the above writers we can conclude that preprocessing is the correction of image deficiencies and the removal of flaws for better information extraction and analysis. Different writers classify the preprocessing methods differently; some classify them into two groups by including atmospheric correction under radiometric correction.
The preprocessing correction model involves the initial processing of raw image data to correct geometric distortions, to calibrate the data radiometrically and to eliminate the noise present in the data. All pre-processing methods are considered under three heads, namely, (i) geometric correction methods, (ii) radiometric correction methods, and (iii) atmospheric correction methods (Reddy, 2008). Other writers classify preprocessing methods into two groups by taking atmospheric correction under radiometric correction: pre-processing operations, sometimes referred to as image restoration and rectification, are intended to correct for sensor- and platform-specific radiometric and geometric distortions of data (CCRS). Even though different writers classify preprocessing operations differently, the issues raised under atmospheric correction and the other correction methods are the same. Depending on the source of error, deficiency correction and flaw removal are divided into two categories:
• Radiometric
• Geometric

2.1. Radiometric correction methods

Radiometric correction is concerned with improving the accuracy of surface spectral reflectance, emittance, or back-scattered measurements obtained using a remote sensing system. It covers detector error correction and atmospheric and topographic corrections. The primary function of remote sensing data quality evaluation is to monitor the performance of the sensors, which is done continuously by applying radiometric correction models to digital image data sets. The radiance measured by any given system over a given object is influenced by factors such as changes in scene illumination, atmospheric conditions, viewing geometry and instrument response characteristics (Lillesand and Kiefer, 2000, cited in Reddy, 2008). Radiometric correction is the removal of sensor or atmospheric noise, to more accurately represent ground conditions and improve image fidelity. Two important processes are carried out in this type of correction: cosmetic correction and atmospheric correction.

2.1.1 Cosmetic correction

This operation helps to remove or correct image deficiencies created by sensor defects. According to Bakker et al. (2001), cosmetic correction involves all those operations that are aimed at correcting visible errors and noise in the image data. Defects in the data may be in the form of periodic or random missing lines (line dropout), line striping and random or spike noise. Let us see each defect one by one:

A. Missing scan lines (line dropout)

Missing scan lines occur when a detector fails to operate during a scan. This results in a zero brightness value in each pixel of the particular line, which will appear black in the image. Line dropout occurs due to recording problems, when one of the detectors of the sensor in question either gives wrong data or stops functioning. The Landsat ETM, for example, has 16 detectors in all its bands except the thermal band. A loss of one of the detectors would result in every sixteenth scan line being a string of zeros that would plot as a black line on the image.

Figure 1. Dropped lines. Each rectangle represents a pixel; the image is represented by varying DN values, but the black line has no values, i.e. zero DN values, which indicates that a detector failed for this line.
Dropped lines occur when there are system errors which result in missing or defective data along a scan line. Dropped lines are normally corrected by replacing the line with the pixel values in the line above or below, or with the average of the two (Gens, 2000). The missing value is replaced by the average of the corresponding pixels on the scan lines above and below the defective line; with i as the line (row) index and j as the pixel (column) index, that is:

v(i, j) = (v(i-1, j) + v(i+1, j)) / 2

rounding the result to the nearest integer if the data are recorded as integer counts. Where the missing line is the first or last line of the image, a missing pixel value along a dropped scan line is instead replaced by the value of the corresponding pixel on the immediately adjacent scan line (Mather and Koch, 2011).

B. Line striping

Line striping is caused by the mis-calibration of one of the detectors on the sensor, i.e. it occurs due to non-identical detector response. Although the detectors for all satellite sensors are carefully calibrated and matched before the launch of the satellite, with time the response of some detectors may drift to higher or lower levels, resulting in relatively higher or lower values along every sixth line in the image data (Gens, 2000).

Figure 2. Striping (left) and the de-striped result (right).

Line striping is corrected using various methods: look-up tables (radiometric response measurements at different brightness levels), onboard calibration, or histogram matching (gain and offset adjustment to match the line pattern), sometimes used in combination (Gens, 2000).
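A minimal sketch, in Python with NumPy, of the dropped-line repair described above. The assumption that dropped lines can be located as all-zero rows, and the function name, are mine for illustration:

```python
import numpy as np

def repair_dropped_lines(img):
    """Replace dropped scan lines (assumed here to be all-zero rows)
    with the average of the neighboring lines, or with the single
    adjacent line at the image edges."""
    out = img.astype(np.float64).copy()
    dropped = np.where(~img.any(axis=1))[0]  # indices of all-zero rows
    for i in dropped:
        if i == 0:                        # first line: copy the line below
            out[i] = out[i + 1]
        elif i == img.shape[0] - 1:       # last line: copy the line above
            out[i] = out[i - 1]
        else:                             # interior: average of neighbors
            out[i] = (out[i - 1] + out[i + 1]) / 2.0
    return np.rint(out).astype(img.dtype)  # round back to integer counts

# Example: a 5x4 image in which line 2 was dropped
img = np.array([[10, 12, 11, 13],
                [14, 15, 16, 14],
                [ 0,  0,  0,  0],   # dropped scan line
                [18, 19, 20, 18],
                [21, 22, 21, 23]], dtype=np.uint8)
print(repair_dropped_lines(img))
```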
2.1.2 Atmospheric correction

All reflected and emitted radiance passes through the atmosphere, which exposes it to atmospheric scattering and absorption; these processes lead to image distortion. Any sensor that records electromagnetic radiation from the Earth's surface using visible or near-visible radiation will typically register a mixture of two kinds of energy. The value recorded at any pixel location on a remotely sensed image does not represent the true ground-leaving radiance at that point: part of the brightness is due to the reflectance of the target of interest, and the remainder is derived from the brightness of the atmosphere itself (Hadjimitsis, 2010). Similarly, Bakker et al. (2011) stated that all reflected and emitted radiation leaving the earth's surface is attenuated, mainly due to absorption and scattering by the constituents of the atmosphere. The atmosphere-induced distortions occur twice in the case of reflected sunlight and once in the case of emitted radiation. Their effect on remote sensing data can be reduced by applying atmospheric correction techniques. These corrections relate to the influence of haze, sun angle and skylight.

A. Haze reduction

Haze affects the contrast of the image by adding to the DN values as a result of Mie scattering, which causes image blurring. Aerial and satellite images often contain haze, and its presence reduces image contrast and makes visual examination of images difficult. In Rayleigh scattering, the particles responsible for the effect are smaller than the radiation's wavelength (e.g. oxygen and nitrogen). Haze has an additive effect resulting in higher DN values. One means of haze compensation in multispectral data is to observe the radiance recorded over target areas of zero reflectance. For example, the reflectance of deep clear water is essentially zero in the NIR region of the spectrum; therefore, any signal observed over such an area represents the path radiance, and this value can be subtracted from all the pixels in that band.

Figure 3. Haze reduction: (a) before haze removal, (b) after haze removal.

B. Sun angle correction

According to Bakker et al. (2011), the position of the sun relative to the earth changes depending on the time of day and the day of the year. As a result, image data of different seasons are acquired under different solar illumination. An absolute correction involves dividing the DN values in the image data by the sine of the solar elevation angle.

Figure 4. Sun angle correction. Landsat 7 ETM+ color infrared composites acquired with different sun angles: (a) the left image was acquired with a sun elevation of 37° and (b) the right image with a sun elevation of 42°; the difference in reflectance is clearly visible. (c) The left image was corrected to match the right image.
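A minimal sketch of the two corrections just described, for a single NumPy band of DN values. Estimating the path radiance as the band minimum (a dark-object assumption), the function names and the 37° elevation are illustrative choices, not values from a real scene:

```python
import numpy as np

def dark_object_subtraction(band):
    """Haze compensation: subtract the path radiance, estimated here
    as the minimum DN in the band (assumed to be a dark object such
    as deep clear water in the NIR)."""
    path_radiance = band.min()
    return band - path_radiance

def sun_angle_correction(band, solar_elevation_deg):
    """Absolute sun-angle correction: divide the DN values by the
    sine of the solar elevation angle."""
    return band / np.sin(np.radians(solar_elevation_deg))

# Illustrative use on a small synthetic NIR band
nir = np.array([[38, 40, 52],
                [45, 39, 60],
                [41, 47, 55]], dtype=np.float64)
haze_free = dark_object_subtraction(nir)           # removes the additive offset
corrected = sun_angle_correction(haze_free, 37.0)  # hypothetical 37° elevation
print(corrected)
```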
2.2. Geometric correction

Geometric distortion is an error in an image caused by one of two possibilities: internally, by the geometry of the sensor, or externally, by the attitude of the sensor or the shape of the object. Supporting this idea, Kuznetsov et al. (2012) describe geometric distortion as an error in the image between the actual image coordinates and the ideal image coordinates. Geometric distortion is classified into internal distortion, resulting from the geometry of the sensor, and external distortion, resulting from the attitude of the sensor or the shape of the object. To correct such geometric distortion in an image we use different geometric correction methods. Murayama and Dassanayake (2010) state that geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real-world coordinates (e.g. latitude and longitude) on the Earth's surface. Conversion of the data to real-world coordinates is carried out by analyzing well-distributed ground control points (GCPs). This is done in two steps.

Georeferencing: This involves the calculation of the appropriate transformation from image to terrain coordinates. For example, between a Landsat 30 m ETM+ image and a Quickbird 0.7 m natural color image, ground control points are identified in recognizable locations in both images. These points should be static with respect to temporal change; road intersections are among the best sources of GCPs, while features that move through time (e.g. shorelines) should be avoided if possible.

Figure 5. Georeferencing.

Geocoding: This step involves resampling the image to obtain a new image in which all pixels are correctly positioned within the terrain coordinate system. Resampling is used to determine the digital values to place in the new pixel locations of the corrected output image.

Figure 6. Geocoding.

There are different resampling techniques; according to Murayama and Dassanayake (2010) there are three:
1. Nearest neighbor
2. Bilinear interpolation
3. Cubic convolution
A short sketch contrasting the first two is given at the end of this section.

1. Nearest neighbor

According to Rees (2011), the nearest neighbor approach uses the value of the closest input pixel for the output pixel value. To determine the nearest neighbor, the algorithm uses the inverse of the transformation matrix to calculate the image file coordinates of the desired geographic coordinate. The pixel value occupying the closest image file coordinate to the estimated coordinate is used for the output pixel value in the georeferenced image. This means that the nearest pixel value has more influence than pixels farther away.

Figure 7. Nearest neighbor.

Advantages:
• Output values are the original input values. Other methods of resampling tend to average surrounding values; this may be an important consideration when discriminating between vegetation types or locating boundaries.
• Since the original data are retained, this method is recommended before classification.
• Easy to compute and therefore fastest to use.

Disadvantages:
• Produces a choppy, stair-stepped effect; the image has a rough appearance relative to the original unrectified data.
• Data values may be lost, while other values may be duplicated. In the illustration, an input grid is superimposed on an output grid; input values closest to the center of each output cell are sent to the output file, so some values (e.g. 13 and 22) are lost while others (e.g. 14 and 24) are duplicated.

2. Bilinear interpolation

The bilinear interpolation approach uses the weighted average of the nearest four pixels to the output pixel.

Figure 8. Bilinear interpolation.

Advantages:
• The stair-step effect caused by the nearest neighbor approach is reduced; the image looks smooth.

Disadvantages:
• Alters the original data and reduces contrast by averaging neighboring values together.
• Is computationally more expensive than nearest neighbor.

3. Cubic convolution

The cubic convolution approach uses the weighted average of the nearest sixteen pixels to the output pixel. The output is similar to bilinear interpolation, but the smoothing effect caused by the averaging of surrounding input pixel values is more dramatic.

Figure 9. Cubic convolution.

Advantages:
• The stair-step effect caused by the nearest neighbor approach is reduced; the image looks smooth.

Disadvantages:
• Alters the original data and reduces contrast by averaging neighboring values together.
• Is computationally more expensive than nearest neighbor or bilinear interpolation.

In general, image preprocessing is a very essential step for better image analysis and interpretation because it corrects different types of image distortion. Similarly, Murayama and Dassanayake (2010) state that preprocessing includes data operations which normally precede further manipulation and analysis of the image data to extract specific information. These operations aim to correct distorted or degraded image data to create a more faithful representation of the original scene.
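As promised above, a minimal sketch of nearest neighbor and bilinear resampling for a simple scale change, assuming a single-band NumPy array. A real geocoding step would apply the full image-to-terrain transformation rather than this plain scaling, and the function names are illustrative:

```python
import numpy as np

def resample_nearest(img, out_shape):
    """Nearest neighbor: each output pixel takes the value of the
    closest input pixel, so original DN values are preserved."""
    r = (np.arange(out_shape[0]) * img.shape[0] / out_shape[0]).astype(int)
    c = (np.arange(out_shape[1]) * img.shape[1] / out_shape[1]).astype(int)
    return img[np.ix_(r, c)]

def resample_bilinear(img, out_shape):
    """Bilinear: each output pixel is a distance-weighted average of
    the four surrounding input pixels (smoother, but alters DN values)."""
    out = np.empty(out_shape, dtype=np.float64)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            y = i * (img.shape[0] - 1) / (out_shape[0] - 1)
            x = j * (img.shape[1] - 1) / (out_shape[1] - 1)
            y0, x0 = int(y), int(x)
            y1 = min(y0 + 1, img.shape[0] - 1)
            x1 = min(x0 + 1, img.shape[1] - 1)
            dy, dx = y - y0, x - x0
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx) +
                         img[y0, x1] * (1 - dy) * dx +
                         img[y1, x0] * dy * (1 - dx) +
                         img[y1, x1] * dy * dx)
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
print(resample_nearest(img, (8, 8)))   # blocky, but values unchanged
print(resample_bilinear(img, (8, 8)))  # smooth, but values averaged
```

The contrast matches the advantages listed above: nearest neighbor retains the original DN values (hence its recommendation before classification), while bilinear smooths the output at the cost of altering them.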
3. Digital Image Formats and Their Arrangement

According to the Visual Resource Centre, School of Humanities (2011), digital images are electronic representations of images that are stored on a computer. The most important thing to understand about digital images is that you cannot see them and they do not have any physical size until they are displayed on a screen or printed on paper. Until that point, they are just a collection of numbers on the computer's hard drive that describe the individual elements of a picture and how they are arranged. These elements are called pixels, and they are arranged in a grid format with each pixel containing information about its color or intensity. Band interleaved by line (BIL), band interleaved by pixel (BIP) and band sequential (BSQ) are often taken to be image formats, but this is not accurate; rather, they are schemes for storing the actual pixel values of an image in a file (a small sketch of the three orderings is given at the end of this section).

Figure 10. Digital data format.

According to the ESRI resource center, band interleaved by line (BIL), band interleaved by pixel (BIP) and band sequential (BSQ) are three common methods of organizing image data for multiband images. BIL, BIP and BSQ are not in themselves image formats but are schemes for storing the actual pixel values of an image in a file. Meanwhile, according to the Visual Resource Centre, School of Humanities (2010), there are four main file formats for images: TIFF, JPEG, PNG and GIF.

TIFF (Tagged Image File Format)
Description: TIFF images are usually used for master image files. They contain image information in a lossless format (i.e. no image information is lost when images are saved) and so tend to be fairly large in size. They are therefore a good format for archiving images, but the large file size makes TIFF unsuitable for web delivery or for use in presentation software such as PowerPoint.
Good for: master copies of images, as all image information is retained when files are saved (lossless format).
But: file sizes tend to be large due to the lossless format, so TIFF files are not suitable for web delivery or inclusion in PowerPoint presentations.

JPEG (Joint Photographic Experts Group)
Description: This is the main format used for photographic-type images on the web. It is a lossy format: images are compressed when saved, so image information is lost each time the image is edited and saved. The benefit of compression is a reduction in file size, but the downside is that if too much compression is applied, visible artefacts such as highlighting around areas of high contrast may occur. Comparing the same JPEG image saved at differing levels of compression shows the effects on quality and file size; with heavy compression there is noticeable blurring around high-contrast edges.
Good for: web delivery of photographic images, due to the ability to compress images without too much loss of quality, giving smaller file sizes than TIFF.
But: too much compression can lead to a loss of quality, so care needs to be taken with the quality setting used when saving images.

GIF (Graphics Interchange Format)
Description: Another format encountered on the Internet, the GIF format is usually used for icons or graphics that contain a limited range of flat colors. It is a lossless format (no information is lost when saving), but it has limited color capabilities and so is not suitable for displaying photographs.
Good for: web delivery of icons and graphics, due to small file size and lossless format.
But: supports a limited range of colors, so it is only suitable for certain types of images.

PNG (Portable Network Graphics)
Description: PNG is a relatively new web graphics format, designed primarily to replace the GIF format for use on the Internet, and potentially to rival TIFF in the long term as an archival format due to its better compression performance. Its main advantages over GIF are an improved lossless compression method and support for true color. Although software support for the PNG format has been slow to develop, this is now beginning to change and it may become a more common format in the future.
Good for: web delivery, due to a lossless compression technique resulting in files of small size but high quality.
But: the JPEG format gives better results for photographic images, and older web browsers and programs may not support the PNG format.
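As referenced above, a small sketch of how the same multiband image is serialized under the three interleaving schemes, assuming a NumPy band stack in (band, row, column) order; the array contents are made up for illustration:

```python
import numpy as np

# A tiny 2-band, 2x3 image: axis order is (band, row, column)
image = np.array([[[ 1,  2,  3],
                   [ 4,  5,  6]],    # band 1
                  [[10, 20, 30],
                   [40, 50, 60]]])   # band 2

# BSQ (band sequential): all of band 1, then all of band 2
bsq = image.ravel()
# -> 1 2 3 4 5 6 10 20 30 40 50 60

# BIL (band interleaved by line): row 1 of every band, then row 2, ...
bil = image.transpose(1, 0, 2).ravel()
# -> 1 2 3 10 20 30 4 5 6 40 50 60

# BIP (band interleaved by pixel): all bands of pixel 1, then pixel 2, ...
bip = image.transpose(1, 2, 0).ravel()
# -> 1 10 2 20 3 30 4 40 5 50 6 60

print(bsq, bil, bip, sep="\n")
```

The choice matters for access patterns: BSQ makes reading a single band cheap, while BIP keeps all the measurements for one pixel together, which suits per-pixel spectral analysis.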
4. Purpose of image enhancement and methods of image enhancement

4.1. Purpose of image enhancement

The purpose of image enhancement is to produce good contrast and visualize images in a better way, in order to understand or extract the intended information from the image. Similarly, Vij and Singh (2008) discuss image enhancement as the improvement of an image's appearance by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image or to convert the image to a form better suited for analysis by a human or a machine. Another writer, Shankar Ray (2011), describes image enhancement as the modification of an image, by changing the pixel brightness values, to improve its visual impact. Image enhancement techniques are performed by deriving the new brightness value for a pixel either from its existing value or from the brightness values of a set of surrounding pixels.

4.2. Methods of image enhancement

According to the Department of the US Army (2003), methods of image enhancement are classified into four: 1) contrast enhancement, 2) band ratios, 3) spatial filtering and 4) principal components. The type of enhancement performed will depend on the appearance of the original scene and the goal of the interpretation. This indicates that performing all methods of enhancement on one image may not be necessary; the selection of methods varies depending on the purpose for which the image is prepared and the type of information to be extracted from it.

1) Contrast enhancement: this type of enhancement mostly serves to increase the brightness range of the image by changing its DN values. According to Al-amri (2011), one of the most important quality factors in satellite images comes from their contrast, and contrast enhancement is frequently referred to as one of the most important issues in image processing. Contrast stretching is an enhancement method performed on an image to locally adjust each picture element value, improving the visualization of structures in both the darkest and the lightest portions of the image at the same time. There are different techniques of image contrast enhancement, such as linear contrast stretching, histogram equalization, histogram stretching and the like, but the main idea is as discussed above, even though each technique is performed slightly differently. A sketch of a simple linear stretch follows after the band ratio discussion below.

Figure 11. Contrast enhancement: before (left) and after (right).

2) Band ratios: contrast techniques help to enhance images with brightness-related problems, but they cannot solve problems such as shadowing; that kind of enhancement takes place using band ratio techniques. The Department of the US Army (2003) states that a band ratio is a commonly used band arithmetic method in which one spectral band is divided by another spectral band. This simple method reduces the effect of shadowing caused by topography, highlights particular image elements, and accentuates temporal differences.
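Returning to contrast enhancement under method 1), here is a minimal sketch of a linear percentile stretch in NumPy; the 2nd/98th percentile cut-offs are a common convention, chosen here purely for illustration:

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch: map the DN range between the given
    percentiles onto the full 0-255 display range."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

# A low-contrast band whose DN values all sit between 60 and 90
band = np.random.randint(60, 91, size=(100, 100), dtype=np.uint16)
print(band.min(), band.max())              # narrow input range
out = linear_stretch(band)
print(out.min(), out.max())                # stretched to roughly 0..255
```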
3) Spatial filtering: this type of enhancement is very important for suppressing over-exaggerated detail at specific places in an image. Murayama and Dassanayake (2010) describe a spatial filter as being designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image, which serves to smooth the appearance of the image. Low-pass filters are very useful for reducing random noise. It is occasionally advantageous to reduce the detail or to exaggerate particular features in an image.

4) Principal components: according to the Department of the US Army (2003), principal component analysis (PCA) is a technique that transforms the pixel brightness values. These transformations compress the data by drawing out the maximum covariance and removing correlated elements. Another writer, Rees (2001), also states that the principal components of a multiband image are the set of linear combinations of the bands that are both independent of, and uncorrelated with, one another.

5. Purpose of image transformation and methods of image transformation

5.1. Purpose of image transformation

Image transformation is a means of re-expressing an image in a different manner, which gives a chance to look at the data in a better way. According to UNEP (2005), the term 'transform' refers to arithmetic operators: all arithmetic operations that allow the generation of a new composite image from one, two or more bands of multi-spectral, multi-temporal, multi-frequency (wavelength), multi-polarization or multi-incidence-angle images. The resulting image may have properties which make it more suitable for a particular purpose than the original:
1. extraction of new information from the existing data, such as change detection, vegetation information and geological information;
2. reduction of data dimensionality, for storage and processing efficiency (fewer bands, less processing time);
3. production of a more physically relevant spectral feature space.
Similarly, Mather and Koch (2011) discuss an image transform as an operation that re-expresses, in a different and possibly more meaningful form, all or part of the information content of a multispectral or gray-scale image. From the above we can understand that by applying image transformation with different transformation techniques we can extract new information with good visualization and minimal storage.

5.2. Methods of image transformation

Different writers classify methods of image transformation differently according to the purpose of their studies; for this paper I follow the UNEP (2005) classification. According to UNEP (2005), methods of image transformation can be classified into six:
1. Simple arithmetic operations
2. Empirically-based image transformations
3. Principal component analysis
4. Multiple discriminant analysis
5. Hue, saturation and intensity (HSI)
6. Fourier transformation

1. Simple arithmetic operations

A simple transformation applies one of the arithmetic operations addition, subtraction, multiplication or division. They are performed on two or more co-registered images of the same geographical area. The images could be separate spectral bands from a single MSS or TM data set, or they may be individual bands from data sets that have been imaged on different dates.

1.1 Image addition: if multiple images of a given region are available for approximately the same date and part of one of the images has some noise (spectral problems, haze, fog, cloud), then that part can be compensated from the other images available.

1.2 Image subtraction: to assess the degree of change in an area, two dates of co-registered images can be used with the subtraction operation, as sketched below.

Figure 12. Change detection between two dates (October 1988 and May 1992).
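A minimal sketch of change detection by image subtraction, assuming two co-registered single-band NumPy arrays; the threshold used to flag change, like the data, is an illustrative choice:

```python
import numpy as np

def change_detection(date1, date2, threshold=20):
    """Difference two co-registered images of the same area and flag
    pixels whose DN changed by more than the threshold."""
    diff = date2.astype(np.int32) - date1.astype(np.int32)  # signed difference
    return diff, np.abs(diff) > threshold

# Two made-up 3x3 acquisitions of the same scene
img_1988 = np.array([[100, 102,  98],
                     [101, 150,  99],
                     [100, 100,  97]], dtype=np.uint8)
img_1992 = np.array([[101, 103,  99],
                     [100,  60,  98],
                     [ 99, 101,  96]], dtype=np.uint8)

diff, changed = change_detection(img_1988, img_1992)
print(diff)     # the center pixel shows a large negative change
print(changed)  # boolean change mask
```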
1.3 Image multiplication: if the analyst is interested in a part of an image, then extracting that area can be done by multiplying the pixels in the area of interest by 1 and the rest by 0. This is applied mainly when the boundary of the area of interest is irregular.

1.4 Image division (image ratioing): dividing the pixels in one image by the corresponding pixels in a second image is the most commonly used transformation. It is a very important transformation technique because:
• certain aspects of the shape of the spectral reflectance curves of the different earth-surface cover types can be brought out by ratios;
• undesirable effects on the recorded images, such as the effect of variable illumination resulting from variations in topography, can be reduced by ratios.

2. Empirically-based image transformations

Experience with Landsat MSS data for agricultural areas, and the difficulties encountered in the use of ratio transformations and principal components, led to the development of image transformations based on observations.

2.1 Perpendicular Vegetation Index (PVI): a plot of reflectance measured in the visible red band against reflectance in the near infrared for a partly vegetated area results in a characteristic plot, and the soil line in this two-dimensional space is used for calculating the distance of vegetation from the line.

2.2 Tasseled cap transformation: the PVI considers spectral variation in two of the four Landsat MSS bands and uses the distance from a soil line in the two-dimensional space defined by these two MSS bands as a measure of the biomass of green leaf area; the tasseled cap transformation extends this idea.

3. Principal component analysis (PCA)

Adjacent bands in multispectral scanner remotely sensed data (images) are generally correlated. Multi-band visible/NIR images of vegetated areas show negative correlation between the NIR and visible red bands, and positive correlation among the visible green and red bands. This is because the spectral characteristics of vegetation are such that, as the vigor or greenness of the vegetation increases, the red reflectance diminishes and the NIR reflectance increases. The presence of correlation among the bands of optical reflected MSS images implies that there is redundancy in the data; some information is being repeated, and it is this repetition of information between bands that is reflected in the correlation. Principal component analysis helps to remove such redundancy by compressing the data: it draws out the maximum covariance and removes the correlated elements.

4. Multiple discriminant analysis

This is an image transformation using linear functions called discriminant functions. They represent coordinate axes in the multidimensional space defined by the spectral bands making up the data. As in PCA, the relationship between the spectral bands and the discriminant function axes is derived, and the coordinates of the individual pixel vectors are computed in terms of the discriminant functions. A simple example: if you have two groups of land with distinct spectral reflectances, they can be discriminated on the basis of the measurements in this multidimensional space. Some scientists think that this transformation was made for special assignments, but despite that it has been found very useful in those special cases where no solution can be found without it.

5. Hue, saturation and intensity (HSI)

Hue is the angular variable giving the direction of the color; saturation is the lightness of the color (toward white) on a 0-255 scale, i.e. the amount of white in the color; intensity is the color strength. One common formulation (valid when blue is the smallest of the three components) is:

I = R + G + B
H = (G - B) / (I - 3B)
S = (I - 3B) / I
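A minimal sketch of this HSI computation for a single RGB pixel, directly implementing the three formulas above (and assuming, as that formulation does, that blue is the smallest component):

```python
def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel to (intensity, hue, saturation) using the
    simple formulation I = R+G+B, H = (G-B)/(I-3B), S = (I-3B)/I,
    which assumes B is the smallest of the three components."""
    i = r + g + b
    h = (g - b) / (i - 3 * b)
    s = (i - 3 * b) / i
    return i, h, s

# A reddish pixel: blue is the minimum, as the formulation requires
print(rgb_to_hsi(200.0, 120.0, 40.0))  # -> (360.0, 0.333..., 0.666...)
```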
6. Fourier transformation

The five transformations discussed so far all use the multidimensional (multi-band) space of remotely sensed data; the Fourier transformation uses a single band. The main idea of this transformation is that the gray-scale values forming a single image or single band can be viewed as a 3-D surface: the row and column (x, y) spatial coordinates define two axes, and the gray-scale value (0-255) at each pixel gives the third dimension. The resulting product shows the frequency of certain features across the image; it is, in a sense, a kind of histogram of the image in 3-D.

6. Vegetation indices and their relation to vegetation degradation

6.1 What is a vegetation index?

According to Jackson and Huete (1991), a vegetation index is calculated from spectral bands of data by combining two or more of them. Vegetation indices are formed from combinations of several spectral values that are mathematically recombined in such a way as to yield a single value indicating the amount or vigor of vegetation within a pixel (Campbell, 1996, cited in Freitas et al., 2005).

6.2 Vegetation indices and degradation

The best-known vegetation index method is the NDVI, the normalized difference vegetation index, computed as NDVI = (NIR - Red) / (NIR + Red). It is a good means of assessing the amount of greenness of an area; inversely, a falling NDVI indicates the level of degradation of an area. For example, take a Bahirdar image from the 1990 winter season, calculate the NDVI value and get a result of 0.7; then, ten years later in 2000, take another image of the same season, calculate the NDVI value and get a result of 0.2. This indicates that in 1990 Bahirdar was covered by green vegetation, while the 2000 image shows that most of the area formerly covered by vegetation has been degraded and is now covered by rock. If the NDVI value approaches 1, the area has good vegetation cover; if the NDVI value approaches 0, the area has little vegetation, meaning it is covered by rock; and if the NDVI value is negative, the area has no vegetation and is instead covered by snow.

7. Digital image classification

7.1 What is a digital image