The amount and distribution of gaps in vegetation canopy are useful indicators of multiple ecosystem processes and functions. In this paper, we describe a semiautomated approach for estimating canopy-gap size distributions in rangelands from high-resolution (HR) digital images using image interpretation by observers and statistical image classification techniques. We considered two different classification methods (maximum-likelihood classification and logistic regression) and both pixel-based and object-based approaches to estimate canopy-gap size distributions from 2- to 3-cm resolution UltraCamX color infrared aerial photographs for arid and semiarid shrub sites in Idaho, Nevada, and New Mexico. We compare our image-based estimates to field-based measurements for the study sites. Generally, the percent of input points correctly classified and the kappa coefficients of agreement for plot image classifications were very high. Plots with low kappa values yielded canopy-gap estimates that differed substantially from field-based estimates. We found a strong relationship (R² > 0.9 for all four methods evaluated) between image- and field-based estimates of the total percent of the plot in canopy gaps greater than 50 cm for plots with a classification kappa greater than 0.5. Performance of the remote sensing techniques varied for small canopy gaps (25 to 50 cm) but was very similar for moderate (50 to 200 cm) and large (> 200 cm) canopy gaps. Our results demonstrate that canopy-gap size distributions can be reliably estimated from HR imagery in a variety of plant community types. Additionally, we suggest that classification goodness-of-fit measures are a potentially useful tool for identifying and screening out plots where the precision of estimates from imagery may be low. We conclude that classification of HR imagery based on observer-interpreted training points and image classification is a viable technique for estimating canopy-gap size distributions.
Our results are consistent with other research that has looked at the ability to derive monitoring indicators from HR imagery.
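The kappa-based screening described above — discarding plots whose classification kappa is at or below 0.5 before comparing image- and field-based gap estimates — can be sketched as follows. This is a minimal illustration, not the paper's code; the function name, the 2 × 2 confusion matrix (canopy vs. gap, interpreter-labeled points against the image classification), and the example counts are all hypothetical.

```python
def cohens_kappa(confusion):
    """Cohen's kappa coefficient of agreement from a square confusion matrix.

    Rows are reference (observer-interpreted) classes, columns are
    classified (image-derived) classes.
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of points on the matrix diagonal
    observed = sum(confusion[i][i] for i in range(k)) / n
    # Expected chance agreement from the row and column marginals
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)


# Hypothetical confusion matrix for one plot:
# rows = observer label (canopy, gap), columns = classification result
conf_mat = [[40, 5],
            [10, 45]]

kappa = cohens_kappa(conf_mat)  # 0.7 for this example

# Screen out plots whose classification agreement is too low,
# mirroring the kappa > 0.5 threshold used in the paper
usable = kappa > 0.5
```

A plot passing this screen would then contribute its image-derived gap-size distribution to the comparison against field transect measurements; plots failing it would be flagged for re-interpretation or excluded.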
Vol. 65 • No. 2