
Volume 13, Issue 8 e4206
ARTICLE
Open Access

Democratizing macroecology: Integrating unoccupied aerial systems with the National Ecological Observatory Network

Michael J. Koontz (Corresponding Author)
Earth Lab, University of Colorado, Boulder, Colorado, USA
Correspondence email: [email protected]

Victoria M. Scholl
Earth Lab, University of Colorado, Boulder, Colorado, USA
Department of Geography, University of Colorado, Boulder, Colorado, USA
U.S. Geological Survey, Geosciences and Environmental Change Science Center, Lakewood, Colorado, USA

Anna I. Spiers
Earth Lab, University of Colorado, Boulder, Colorado, USA
Department of Ecology and Evolutionary Biology, University of Colorado, Boulder, Colorado, USA

Megan E. Cattau
Department of Human-Environment Systems, Boise State University, Boise, Idaho, USA

John Adler
Department of Geography, University of Colorado, Boulder, Colorado, USA
Battelle, National Ecological Observatory Network (NEON), Boulder, Colorado, USA

Joseph McGlinchy
Earth Lab, University of Colorado, Boulder, Colorado, USA
Hydrosat Inc., Washington, D.C., USA

Tristan Goulden
Battelle, National Ecological Observatory Network (NEON), Boulder, Colorado, USA

Brett A. Melbourne
Department of Ecology and Evolutionary Biology, University of Colorado, Boulder, Colorado, USA

Jennifer K. Balch
Earth Lab, University of Colorado, Boulder, Colorado, USA
Department of Geography, University of Colorado, Boulder, Colorado, USA
First published: 24 August 2022
Handling Editor: R. Chelsea Nagy
Funding information: Cooperative Institute for Research in Environmental Sciences; National Aeronautics and Space Administration, Grant/Award Number: 80NSSC18K0750; National Science Foundation, Grant/Award Numbers: 1846384, 1906144, 2017889; University of Colorado Boulder Grand Challenge Initiative; US Geological Survey North Central Climate Adaptation Science Center; USDA Forest Service Western Wildland Environmental Threat Assessment Center

Abstract

Macroecology research seeks to understand ecological phenomena with causes and consequences that accumulate, interact, and emerge across scales spanning several orders of magnitude. Broad-extent, fine-grain information (i.e., high spatial resolution data over large areas) is needed to adequately capture these cross-scale phenomena, but these data have historically been costly to acquire and process. Unoccupied aerial systems (UAS or drones carrying a sensor payload) and the National Ecological Observatory Network (NEON) make the broad-extent, fine-grain observational domain more accessible to researchers by lowering costs and reducing the need for highly specialized equipment. Integration of these tools can further democratize macroecological research, as their strengths and weaknesses are complementary. However, using these tools for macroecology can be challenging because mental models are lacking, thus requiring large up-front investments in time, energy, and creativity to become proficient. This challenge inspired a working group of UAS-using academic ecologists, NEON professionals, imaging scientists, remote sensing specialists, and aeronautical engineers at the 2019 NEON Science Summit in Boulder, Colorado, to synthesize current knowledge on how to use UAS with NEON in a mental model for an intended audience of ecologists new to these tools. Specifically, we provide (1) a collection of core principles for collecting high-quality UAS data for NEON integration and (2) a case study illustrating a sample workflow for processing UAS data into meaningful ecological information and integrating it with NEON data collected on the ground—with the Terrestrial Observation System—and remotely—from the Airborne Observation Platform. With this mental model, we advance the democratization of macroecology by making a key observational domain—the broad-extent, fine-grain domain—more accessible via NEON/UAS integration.

INTRODUCTION

Macroecology is the study of spatially extensive systems whose biological, geophysical, and social components interact dynamically both within and across spatiotemporal scales (Heffernan et al., 2014). Macroecology, in its explicit consideration of scale, extends from a rich history of basic ecological research seeking to explain patterns in nature (Levin, 1992; Turner, 1989). At the same time, macroecology is highly relevant to applied ecology, as the broader spatial extents studied reflect the scale at which many societally relevant challenges, and perhaps their solutions, arise (Heffernan et al., 2014; LaRue et al., 2021). The causes and consequences of phenomena under investigation in macroecology can span many spatial scales, which motivates a characteristic feature of the data to be brought to bear: They must often be simultaneously fine in grain (i.e., spatial resolution) and broad in extent (i.e., area covered) (Beck et al., 2012).

Ecologists typically face a data collection trade-off between grain and extent that constrains the observational domain of their research (Ernest, 2018; Estes et al., 2018). Indeed, the spatial and temporal observational domains of most ecology research are narrow (Estes et al., 2018). The grain/extent trade-off can sometimes be overcome, but at a high cost. For example, the Global Airborne Observatory collects high spatial and spectral resolution data at broad extents (Asner et al., 2007, 2012), but the price of data acquisition and processing tallies in the millions of US dollars (USD), even though the per-area cost is low (Asner et al., 2013). As another example, the US Forest Service Forest Inventory and Analysis program maintains a network of over 350,000 fine-grain field plots regularly spaced over the entire forested area of the United States (over 9.1 million km2; approximately 1 plot every 2400 ha) at an annual cost of tens of millions of dollars (Alvarez, 2020; Gillespie, 1999). Most science studies have relatively modest budgets and are conducted by just a few individuals (Heidorn, 2008). The modal award size from the National Science Foundation's (NSF) Division of Environmental Biology was about 200,000 USD between 2005 and 2010 (Hampton et al., 2013). While the fine-grain, broad-extent observational domain is invaluable for macroecology, it can be inaccessible to ecologists with resource or funding limitations.

Macroecology can be democratized when barriers to research participation are reduced (Guston, 2004), such as by lowering the cost of, or innovating past limitations to, access to relevant scales of observation. Removing these barriers improves science because the rate, direction, and quality of science are, in part, shaped by the available research inputs (Nagaraj et al., 2020). For instance, the cost of imagery from the Landsat Earth observation archive was reduced in 1995 and restrictions on sharing were relaxed, which dramatically increased the quantity, quality, and diversity of Landsat-enabled science (Nagaraj et al., 2020). The same archive became freely available in 2008 with concomitant benefits to projects that rely on Landsat observations (e.g., Picotte et al., 2020). The changing accessibility of Landsat data is noteworthy for macroecology, as the archive has provided consistent global-extent, relatively fine-grain (30 m) imagery since 1984. Particularly when integrated with other tools whose purpose is to broaden research participation (such as the free, planetary-scale geographic information system "for everyone," Google Earth Engine; Gorelick et al., 2017), Landsat imagery has led to breakthrough science that is "globally consistent and locally relevant" such as the first global map of forest cover changes over a decade-long period at a relatively fine scale (Hansen et al., 2013). Thus, democratized research stimulates revolutionary science.

In its democratic aim, the National Ecological Observatory Network (NEON) is revolutionary (Balch et al., 2020; NSF, 2013). NEON is a continental-scale observation facility in the United States comprising 81 sites within 20 ecoclimatically distinct domains and an operational lifespan on the order of decades (Keller et al., 2008; Schimel, 2013). NEON is designed to collect rigorous, consistent, long-term, and open access data to better understand how US ecosystems are changing, using a combination of field measurements obtained by trained personnel, ground- and aquatic-based automated sensors, and plane-based instruments that collect both active and passive remotely sensed data (Kampe et al., 2010; NSF, 2013). NEON observations span spatial scales, from measurements of individual organisms within small field plots to 10-cm resolution red–green–blue (RGB) imagery, 1-m imaging spectroscopy, and lidar (light detection and ranging) point clouds across hundreds of square kilometers, with measurements replicated across sites that span the continental extent of NEON (Keller et al., 2008; Musinsky et al., 2022). A stated goal of NEON is to democratize access to ecological research, particularly at broad extents (NSF, 2013)—its promise is continental-scale ecology for everyone. NEON pairs publicly available data with a strong outreach and education effort to help realize this promise. In this way, NEON broadens access to macroecology by reducing barriers to entry, particularly cost, fieldwork requirements, and technical expertise (Nagy et al., 2021). An “instrument” such as NEON collecting standardized data at such scales leads to inevitable trade-offs—in the specific times, locations, and type of data that are sampled. While the NEON data are on their own sufficient for advancing ecology, part of what makes NEON revolutionary is its foresight in facilitating connections to other ecological data. In this way, the fundamental limitations of NEON can be overcome with bridges to more targeted ecological studies.

UAS can also revolutionize ecology (Anderson & Gaston, 2013). UAS, comprising a vehicle and a payload, are increasingly being used to collect high spatial resolution information over relatively large spatial extents for ecological science applications (Wyngaard et al., 2019). The vehicle is also known as a “drone” or a “UAV” standing for “unoccupied aerial vehicle,” “unhumanned aerial vehicle,” “uncrewed aerial vehicle,” or “unmanned aerial vehicle,” though we support phasing out the gendered language of this last expansion (Joyce et al., 2021). The payload is the instrumentation carried by the vehicle beyond what is critical for flight operations, and gives the UAS its scientific value. Importantly, it is not the vehicle itself that enables ecological studies at heretofore inaccessible scales, but rather the vehicle's ability to position a data collecting payload (i.e., a sensor) in a repeatable, efficient, hard-to-reach manner. For example, one use case for UAS is structure-from-motion (SfM) photogrammetry, which generates a three-dimensional model of an area of interest using two-dimensional images from multiple overlapping viewing angles (Westoby et al., 2012). The minimum requirement for SfM photogrammetry is two-dimensional imagery, which can be captured from the ground using a handheld sensor (e.g., a digital camera) to great effect for some applications (Piermattei et al., 2019). A UAS-based camera can capture imagery from higher up in, or above, the canopy, which allows for measurement of higher vegetation strata (Kuželka & Surový, 2018), including total height for above-canopy applications. UAS-based SfM photogrammetry also increases the extent that can be covered with surveys (Jackson et al., 2020) because aerial transects are unimpeded by varied terrain and vegetation encountered on ground transects. Unimpeded aerial transects are also more reliably repeated than ground surveys that require navigating through vegetation and are likely to be less impactful to that vegetation. UAS provide an avenue to flexibly and affordably fill spatiotemporal gaps in data collected by traditional means—they can be deployed more frequently and capture finer grain data than airplane- and satellite-based platforms, and can cover greater extents than ground surveys.

UAS and NEON complement each other. Each can be a key tool for macroecology research, but their integration offers an opportunity to alleviate some of their fundamental constraints in much the same way as an integration of NEON with other Earth-observing networks (Balch et al., 2020; Nagy et al., 2021). NEON data derive from "state-of-the-science" instrumentation with thorough documentation and are standardized at a continental scale. NEON data collection is not only preplanned, which makes the resulting data somewhat predictable, but also rigid in space, time, and type. In contrast, UAS operations are nimble and customizable, but the resulting data are relatively under-validated with data standards that are ad hoc, idiosyncratic, and lacking in consistency, which makes interoperability of those data across projects a challenge (Wyngaard et al., 2019). Realization of the benefits of UAS–NEON integration by ecologists is dually challenged by the relative novelty of these tools (Nagy et al., 2021; Wyngaard et al., 2019), as well as by a community gap in the data science skills needed to navigate their associated workflows (Balch et al., 2020; Hampton et al., 2017; Nagy et al., 2021). Not knowing where to start with two new tools is a daunting proposition, and unstructured efforts to gain practical proficiency for research often come at the expense of doing research itself (Olah & Carter, 2017). Reducing these barriers to proficiency therefore has tremendous research value.

Mental models help novices become experienced practitioners by providing a contextual framework for new knowledge (Knapp & D'Avanzo, 2010). A lack of a synthesized contextual framework for the practical use of UAS for ecology research, particularly for NEON integration, challenges the adoption of these tools and hampers their ability to democratize macroecology (Assmann et al., 2019; Wyngaard et al., 2019). We assembled a working group of participants at the 2019 NEON Science Summit in Boulder, Colorado, with a goal to synthesize current practical knowledge and provide a sample workflow to guide ecologists with a mental model for using UAS and integrating with NEON. In this work, we aim to lower the barrier to entry for using UAS and NEON to do ecology. Specifically, we focus on optical data collected by each tool over terrestrial sites and provide (1) a collection of what we consider to be the 10 core principles for integrating UAS with NEON (science requirements, vehicle, payload, environment, flight planning, rules/regulations, radiometric calibration, georeferencing, data management, and data processing) and (2) an illustration of these principles with a real-world, well-documented workflow that processes UAS data into meaningful ecological information, then integrates it with NEON Airborne Observation Platform (AOP) and Terrestrial Observation System (TOS) data at the NEON Niwot Ridge (NIWO) site.

CORE PRINCIPLES FOR UAS/NEON INTEGRATION

Science requirements

We support and extend one of Assmann et al.'s (2019) themes regarding research use of UAS in order to highlight the first core principle for integrating UAS with NEON: knowing what the science requirements are for the data to be collected and what data collection efforts are “good enough” to meet those requirements. Using NEON to advance ecology is a type of data-driven discovery, in which the high-quality, but rote, data collection occurs before the science questions are generated (Lindenmayer & Likens, 2018). UAS data collection can be more flexible and responsive, which makes it more suitable for discovery driven by particular questions posed ahead of time. Integration of UAS and NEON could therefore be considered a hybrid between data- and question-driven discovery, where there is a dynamic between creative use of the existing NEON data, generation of new specific questions, and augmentation of the existing NEON data with UAS data collection to help answer those questions. During this process, a clear science question helps guide the data collection/collation needs, which can minimize the amount of researcher energy spent on developing tools and workflows that ultimately prove to be superfluous (Mahood et al., 2022).

Vehicle

The vehicle in a UAS is the flying machine that holds the payload. One key distinction between vehicle types is whether rotor systems or fixed wings are used for lift (the upward force that keeps the vehicle in the air). Rotocopter vehicles (also known as "multicopters," "multirotors," "quadcopters," "hexacopters," or "octocopters" depending on the number of rotor systems) consist of a body and (usually) four to eight rotor systems that provide both lift and thrust (horizontal motion). These types of vehicles are characterized as "vertical takeoff and landing" (VTOL). Fixed-wing aircraft use wings for lift and use rotor systems only for thrust. Hybrid vehicles use rotor systems for lift during ascent and descent but fixed wings for lift during forward flight, and are sometimes referred to as VTOL fixed-wing systems to highlight this combination of features. The structure and size of the vehicle determine its functionality in the field, and thus, a project's objectives can often help constrain the choices available. Rotocopter platforms are more maneuverable, often less expensive, easier to fly, and more transportable, and have a higher payload capacity relative to fixed-wing aircraft. For these reasons, rotocopters are often preferred by ecologists. In contrast, fixed-wing aircraft have longer flight times with better battery usage and thus can cover larger areas more efficiently than rotocopters. For example, covering the full extent of a given AOP footprint (147.6 ± 107.2 km2 for core and relocatable sites) may be most efficiently accomplished with a fixed-wing or hybrid vehicle. They are also more stable in adverse conditions (e.g., high winds) and have a safer recovery from motor power loss. VTOL fixed-wing systems can combine the efficiency of a fixed wing with the small takeoff/landing footprint of a rotocopter. A summary of the advantages and disadvantages of these vehicle types is found in Table 1.

TABLE 1. Summary of vehicle and payload considerations for unoccupied aerial system-enabled ecology.

Vehicle: Rotocopter
Advantages: ease of takeoff and landing; hover capability; maneuverability; affordable
Disadvantages: shorter flight time (~20 min)
References: Anderson and Gaston (2013), Goodbody et al. (2017), Pádua et al. (2017)

Vehicle: Fixed wing
Advantages: longer flight time (2+ h); covers large spatial extent; more stable in wind
Disadvantages: higher minimum flight speed to keep it aloft (affecting overlap, image quality); complex takeoff/landing
References: Anderson and Gaston (2013), Goodbody et al. (2017), Pádua et al. (2017)

Vehicle: Vertical takeoff and landing (VTOL) fixed-wing hybrid
Advantages: simpler takeoff/landing; longer flight time
Disadvantages: newer technology; expensive
References: Anderson and Gaston (2013), Goodbody et al. (2017), Pádua et al. (2017)

Payload: Red–green–blue (RGB) camera
Advantages: small size; affordable; fine spatial resolution
Disadvantages: spectral extent limited to visible wavelengths; spectrally overlapping, imprecise spectral information
References: Pádua et al. (2017), Adão et al. (2017)

Payload: Multispectral sensor
Advantages: small size; more precise spectral information
Disadvantages: limited spectral sampling, typically in visible and infrared wavelengths; more complex data acquisition and post-processing
References: Pádua et al. (2017), Adão et al. (2017)

Payload: Hyperspectral/imaging spectrometer
Advantages: high spectral resolution; high spectral extent
Disadvantages: heavy; expensive; very complex data acquisition and post-processing
References: Pádua et al. (2017), Adão et al. (2017)

A flat surface clear of obstructions (e.g., on dirt rather than grass, away from forest canopy) is ideal for UAS takeoffs and landings. VTOL systems require a smaller takeoff and landing footprint, which may be satisfied with only a small canopy gap, compared with vehicles that use fixed wings for lift, which require a "runway" for takeoff. Locating a suitable takeoff area may be challenging at some NEON field sites (e.g., NIWO, with dense canopy cover) and easier at others (e.g., San Joaquin Experimental Range, with an open woodland ecotype). Takeoffs and landings from a clean, stable, flat surface (e.g., plywood or a car floor mat) will prevent dirt from obstructing or scratching the sensor lens and will make for a more controlled ascent/descent.

With any platform, vehicle endurance limitations and the mission goals will determine how many flights are required to complete data collection. In many cases, it will be necessary to use several batteries to keep the vehicle flying for the duration of a field day. Even if only one flight is needed to collect data, extra batteries are still valuable to have on hand in case the first flight does not go as planned and follow-up flights are required. Batteries from many vehicle manufacturers (e.g., Da-Jiang Innovations [DJI]) will automatically discharge after a period of nonuse as a safety feature, so it is good practice to wait to charge all batteries until near the time they will be used in order to ensure that they will be at their peak capacity when they are needed (e.g., do not charge them until the night before you need them). An energy source to charge batteries in the field, like a solar charger or gasoline-powered generator, may also be necessary for very long missions or multiple days of data collection. As a guideline, you can determine how many batteries your portable energy source can charge by determining its energy capacity in watt-hours (Wh), multiplying by 90% (making the calculation such that you leave 10% of the energy source's capacity rather than fully draining it), then dividing by the capacity of a single UAS battery in Wh and rounding down to the nearest whole number to account for any unpredictable inefficiencies. For instance, a Goal Zero Yeti 1400 battery (used successfully by some authors) can be charged with solar panels and stores 1400 Wh of energy, which results in 1260 Wh of usable energy if it were to be drained to 10% capacity. Each battery of the DJI Phantom 4 Pro aircraft (a common choice for mapping) stores 89.2 Wh of energy, so the Yeti 1400 should be able to charge about 14 batteries before it needs to be recharged itself (1260/89.2 = 14.13, which is 14 when rounded down). If an efficient 1-gallon (3.79-liter) gasoline-powered generator can produce 6000 Wh of energy, that results in 5400 Wh of usable energy, which is equivalent to charging about 60 batteries for the DJI Phantom 4 Pro.
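
As a quick illustration of the battery arithmetic above, the following Python sketch (not part of the original workflow; the capacities are simply the example values quoted in this paragraph) computes how many charges a portable energy source can provide:

```python
import math

def charges_available(source_capacity_wh, uas_battery_wh, reserve_fraction=0.10):
    """Estimate how many UAS batteries a portable energy source can charge.

    Keeps a reserve (default 10%) in the energy source rather than fully
    draining it, and rounds down to absorb unpredictable inefficiencies.
    """
    usable_wh = source_capacity_wh * (1.0 - reserve_fraction)
    return math.floor(usable_wh / uas_battery_wh)

# Example values from the text: a 1400 Wh power station or a 6000 Wh generator
# charging 89.2 Wh flight batteries.
print(charges_available(1400, 89.2))   # -> 14
print(charges_available(6000, 89.2))   # -> 60
```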

Payload

The payload is the equipment carried by the UAS that collects data and combines with the vehicle to constitute the system (the “S” in “UAS”). In fact, despite the spotlight often being on the drone vehicle, the payload component is at least as important, since the main purpose of the vehicle is merely to position the payload where it needs to be in order to capture appropriate data. For ecologists interested in optical data, the payload may be a simple camera or a more specialized remote sensing sensor sensitive to particular wavelengths of electromagnetic radiation. The scientific questions will dictate the data requirements, which will in turn drive the payload decision. Typically, the selection of a sensor represents a trade-off between spatial resolution (the size of pixels in the imagery at a set altitude), spectral resolution (the number of distinct portions of the electromagnetic spectrum that the sensor can detect), spectral extent (how much of the electromagnetic spectrum the sensor can detect), and cost. For example, while imaging spectroscopy provides high spectral resolution and extent that may allow measurement of specific chemical compounds in vegetation (e.g., foliar nitrogen; Knyazikhin et al., 2013), a multispectral instrument with fewer spectral channels (Koontz et al., 2021) or even an RGB camera (Scholl et al., 2020) may be more than sufficient for classifying vegetation to species. Similarly, sensors with high spatial resolution can capture fine detail in their imagery but may reduce the ability to measure a variable of interest, such as individual trees, as post-processing steps can be negatively affected by the movement of those fine details in the wind (Young et al., 2022). Hyperspectral instruments and high-resolution cameras are relatively expensive in terms of purchase cost, post-processing time, and data storage requirements, but simple RGB and multispectral cameras can be affordably bought off the shelf, so it is worth considering whether they would suffice for the scientific question of interest. A summary of the advantages and disadvantages of these different payload types for collecting optical data can be found in Table 1.
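
To make the altitude/spatial-resolution trade-off concrete, the following Python sketch approximates the ground sampling distance for a nadir-pointing frame camera. The sensor width, focal length, and image width are illustrative assumptions roughly matching a small consumer RGB mapping camera, not values taken from this article:

```python
def ground_sampling_distance_cm(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Approximate ground sampling distance (cm/pixel) for a nadir-pointing frame camera."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Illustrative values approximating a small RGB mapping camera
# (~13.2 mm sensor width, ~8.8 mm focal length, 5472-pixel-wide images).
for altitude in (60, 90, 120):
    gsd = ground_sampling_distance_cm(altitude, 13.2, 8.8, 5472)
    print(f"{altitude} m AGL -> ~{gsd:.1f} cm/pixel")
```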

It is also important to consider how the payload will be integrated with the vehicle, which generally requires considering the combination of the vehicle and payload simultaneously. In some cases, the payload can operate entirely independently from the vehicle, and integration only requires a means of physically attaching the components together. In other cases, the payload relies both on power and on electronic signaling from the vehicle in order to capture data, and integration may require more specialized electrical and mechanical engineering expertise. It is generally advisable to use a prebuilt integration kit or an already-integrated sensor/vehicle system if the payload meets the science requirements (or nearly so).

Environment

The environment of the UAS mission can affect both the equipment performance and the data collection such that the intended operational conditions must be considered during vehicle/payload selection and flight planning. Foremost, the vehicle and the payload must be capable of functioning in the desired environment. UAS flights at high elevations or in cold weather will drain the battery faster than flights at sea level in mild conditions, and some popular vehicles will not allow takeoff if the temperature is too cold (or hot). While some vehicles are designed to withstand light precipitation and dust, many would be damaged under such flight conditions. Heavy winds can push the UAS off course or require the UAS to work harder to maintain its course, which drains the battery faster and reduces endurance. Variable terrain within the survey area may also affect vehicle endurance, as more energy is required to ascend and descend while also traversing along flight transects in the horizontal plane. Managing the temperature of the mission-critical electronics is just as important as that of the vehicle's batteries during UAS operations. The vehicle remote controller and any other peripherals such as a tablet computer are susceptible to battery drain in extreme temperatures, and cold temperatures can cause the vehicle and/or sensor to malfunction. The NEON field sites exhibit a wide range of conditions that can impact UAS operations. For instance, the mean annual temperature for NEON AOP sites ranges from −12°C at the Utqiaġvik site in Alaska to 25°C at Lajas Experimental Station in Puerto Rico (NEON Field Site Metadata; https://www.neonscience.org/sites/default/files/NEON_Field_Site_Metadata_20210226_0.csv; accessed 16 March 2021). Expectations of unfavorable environmental conditions may be enough to dictate what equipment should comprise the UAS. For example, high-wind conditions at NIWO may warrant a fixed-wing platform; however, the dense forest would make takeoff and landing much easier with a rotocopter. In some cases, steps can be taken to mitigate the unfavorable environmental conditions, such as keeping equipment out of direct sunlight to prevent overheating (to the point of adding sun umbrellas or shade tarps to the required equipment list) and storing batteries in a dry cooler when not in use in order to insulate them against temperature extremes.

Environmental conditions may also impact data collection on automated flights, particularly for optical data. Ideal conditions for optical data collection are evenly lit with either complete cloud cover or clear skies. If flying takes place under clear sky conditions, then the sun should be high in the sky, so it does not cast long shadows—ideally within a couple of hours of solar noon (i.e., 10:00 AM to 2:00 PM for standard time, and 11:00 AM to 3:00 PM for regions that observe daylight saving time) (Assmann et al., 2019). Note that some SfM software guidelines specifically suggest not flying near solar noon, as this can create particularly bright areas within each image that challenge the SfM algorithms (MapsMadeEasy; https://www.mapsmadeeasy.com/data_collection; accessed 19 November 2021).
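
As a rough planning aid, the following Python sketch approximates local solar noon from longitude and UTC offset. It is a simplification under stated assumptions: it ignores the equation of time (true solar noon can differ by up to about 15 min over the year), and the Boulder-area coordinates are illustrative only:

```python
def approx_solar_noon_local(longitude_deg_east, utc_offset_hours):
    """Approximate local clock time (decimal hours) of solar noon.

    Ignores the equation of time, which shifts true solar noon by up to ~15 min.
    """
    # The sun is highest when local mean solar time is 12:00; convert to clock time.
    offset_hours = utc_offset_hours - longitude_deg_east / 15.0
    return 12.0 + offset_hours

# Illustrative example: near Boulder, CO (~105.27° W) during daylight saving time (UTC-6).
noon = approx_solar_noon_local(-105.27, -6)
hours, minutes = int(noon), int(round((noon % 1) * 60))
print(f"Approximate solar noon: {hours:02d}:{minutes:02d} local time")  # ~13:01
```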

Prior to flights, it is important to ensure that the weather will be favorable for data collection. A handheld instrument for measuring temperature, relative humidity, and wind speed may also aid in the reporting of flight conditions, though note that the wind speed at flight altitude may be different than what is measured on the ground. In many cases, taking a picture of the sky and a screenshot of the weather forecast from a reputable source (e.g., the National Oceanic and Atmospheric Administration) is a convenient and sufficient way to ensure later reporting on flight conditions. In fact, the NEON AOP does exactly this for their daily flight reports.

Flight planning

One of the key benefits of UAS operations is the ability to program missions to be automatically followed by the vehicle's onboard flight software. For optical data collection such as that required for SfM photogrammetry, the mission typically involves aerial transects with images captured at regular time or distance intervals so that objects in a scene are imaged from many viewing angles (often in excess of 100; Figure 1). Successful flight planning requires consideration of the flight parameters, flight planning software, and operational routine.

FIGURE 1.
Black points depict the unoccupied aerial system position for each photograph captured during the flight. Red points in the “X” formation at the center are the high-precision geolocations of the National Ecological Observatory Network vegetation plot monuments. The background color represents the approximate number of photographs captured over each point in the surveyed area based on idealized image footprints projected on the ground surrounding the geolocation of each photograph point (i.e., the black points). Each part of the survey area needs to be imaged a large number of times (likely more than 100 for denser vegetation), which means that some areas at the edges of the flown area will not have coverage suitable for structure-from-motion data processing. The flight area should therefore be larger than the area of interest to ensure sufficient data coverage.

The flight parameters are crucial determinants of whether or not the SfM photogrammetry will successfully create a digital model of the survey area. Flight parameters are typically described in terms of the front and side overlap of the resulting imagery, as well as the sensor angle. The front overlap is a function of flight speed, flight altitude, frequency of image capture, and the vertical field of view of the sensor, while the side overlap is a function of flight altitude, horizontal field of view of the sensor, and distance between transects. Overlap in excess of 80% for both front and side overlap (Dandois et al., 2015) and even as high as 95% front overlap (Frey et al., 2018; Torres-Sánchez et al., 2018) is required for successful photogrammetric reconstructions of more complex vegetation (such as denser forests) using commonly available processing software. Lower overlap may be sufficient for two-dimensional mapping quality, though the processed product may not penetrate deeply into canopy gaps (Dandois et al., 2015), and image artifacts such as “leaning” objects, which were only imaged from an oblique angle, are more prevalent. Additional overlap can be achieved by augmenting parallel transects with a second set of parallel transects rotated 90° to the first (a crosshatch pattern; Figure 1). Additional viewing angles can be achieved by tilting the sensor off nadir in order to capture oblique imagery, which can aid in scene reconstruction (Cunliffe et al., 2016; James & Robson, 2014). Published work exists that determines optimal flight parameters for creating digital representations of specific survey areas (Dandois & Ellis, 2013; Díaz et al., 2020; Frey et al., 2018; Nesbit & Hugenholtz, 2019; Ni et al., 2018; Swayze et al., 2021; Torres-Sánchez et al., 2018; Young et al., 2022), but it still may require some trial and error to optimize parameters for a new study area or system.
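
The overlap geometry described above can be sketched in a few lines of code. The following Python example is an illustration under simplifying assumptions (a flat scene and a nadir-pointing camera), and the field-of-view values are placeholders rather than specifications of any particular sensor; it estimates front and side overlap from flight speed, image trigger interval, altitude, and transect spacing:

```python
import math

def image_footprint_m(altitude_m, fov_deg):
    """Ground footprint (m) in one image dimension for a nadir camera with the given field of view."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def front_overlap(altitude_m, speed_m_s, trigger_interval_s, vertical_fov_deg):
    """Fraction of along-track overlap between successive images."""
    footprint = image_footprint_m(altitude_m, vertical_fov_deg)
    return 1.0 - (speed_m_s * trigger_interval_s) / footprint

def side_overlap(altitude_m, transect_spacing_m, horizontal_fov_deg):
    """Fraction of across-track overlap between adjacent transects."""
    footprint = image_footprint_m(altitude_m, horizontal_fov_deg)
    return 1.0 - transect_spacing_m / footprint

# Illustrative values only: 100 m AGL, 5 m/s, one image every 2 s, 25 m transect
# spacing, and nominal 50° x 65° vertical/horizontal fields of view.
print(f"front overlap: {front_overlap(100, 5, 2, 50):.0%}")   # ~89%
print(f"side overlap:  {side_overlap(100, 25, 65):.0%}")      # ~80%
```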

Flight planning is typically achieved using specialized software, sometimes run on a separate device such as a tablet computer. Most flight software allows for setting the altitude and the desired front and side overlap for a given aircraft and sensor. Two other important software features that may routinely be relevant for ecology are terrain following and Internet-free operations. Terrain following enables the vehicle to ascend and descend to match topographic changes within the area of interest, such that approximately the same altitude above ground level (AGL) is maintained throughout all aerial transects. This serves two key functions: It ensures the safety of the vehicle, and it maintains approximately the same ground sampling distance for imagery, which aids in processing. Some missions are most easily created once in the field in order to incorporate better information on the area of interest, takeoff/landing locations, and visibility throughout the flight. The ability of the software to function offline and to cache background map imagery can be critical for real-world UAS use. The flight software is resource-intensive and generally requires a computer or tablet with relatively high computing power. We have experienced flight software freezing mid-flight due to computing resource overload when using tablets that were not up to the task, which can create a hazardous situation. It is likely worth investing in a device with faster processors and/or more random access memory (RAM). Finally, some flight software packages provide additional functionality if the tablet has geolocation services—an ability to determine its location on the Earth by connecting with satellite networks. For instance, the flight software may display the tablet's location on the background map during the flight or even update the "home point" location for the UAS during the mission as the pilot moves around. The home point is the location to which the UAS returns and lands after a mission is completed, a battery is depleted, or the pilot triggers a manual "return to home" command. An updating home point might allow the pilot to traverse the landscape to stay closer to the UAS, thereby better maintaining a visual line of sight or allowing the UAS to collect more data per flight since the travel distance to the landing point is minimized (during which time data typically are not collected). Not all tablets have geolocation services; as of this writing, the Cellular+Wi-Fi version of the Apple iPad Pro has geolocation services, but the Wi-Fi-only version does not.

A final consideration for successful flight planning is to create a routine for consistently executing missions. Consistent repetition of routine steps prior to, during, and after a flight ensures that all components of the UAS work as intended in concert with each other, and checklists facilitate this consistency (Degani & Wiener, 1993). We highly recommend developing and using some kind of checklist for UAS operations (Appendix S1)—there is good reason they are part of standard operations for a range of aviators from pilots of small private aircraft to NEON AOP to NASA astronauts! Some applications (such as Kittyhawk; https://kittyhawk.io/) allow for automatic logging of checklist run-throughs, which further reduces barriers to their use.

Regulations

In the United States, research use of UAS must comply with legal regulations that govern flight operations. These restrictions have historically been cited as a hurdle to the adoption of UAS for research use (Vincent et al., 2015). There are currently three main legal frameworks governing UAS operations within the United States: permissions/regulations for a specific organization (e.g., a university) granted under a Certificate of Authorization (COA) from the Federal Aviation Administration (FAA), regulations for commercial operations (described in Title 14 of the Code of Federal Regulations Part 107 and colloquially referred to as “Part 107 rules”), and regulations for recreational operations (described in Chapter 448 of Title 49, US Code, Section 44809, and colloquially referred to as “Recreational Flyer rules”). COAs are generally labor-intensive to set up and maintain, as they require ongoing coordination with the FAA, but they can allow for operations not typically permitted under other regulatory frameworks. The commercial and recreational operational rules apply to individuals, rather than organizations, and have progressively become more clearly defined and permissive. For instance, a recent amendment to the Part 107 regulations clarified that the use of UAS by an institution of higher education for research or education purposes is considered “recreational use” and is subject to recreational operational rules rather than commercial operational rules. These rules applying to individuals allow for myriad opportunities to use UAS to collect ecological data without the complex organizational overhead required for a COA. However, the rules within each of these categories are still liable to change, and UAS pilots are responsible for staying aware of any updates.

UAS pilots in the United States must obtain some kind of credentials to operate UAS for research use. Researchers flying under a COA would obtain credentials according to the rules specific to their organization. Flying under Part 107 rules requires a “remote pilot certificate” from the FAA, which can be obtained by passing an initial knowledge examination, and expires after 2 years. Flying under Recreational Flyer rules requires a TRUST certificate from the FAA, which can be obtained by completing a recreational UAS safety test that does not expire. Unlike permissions granted under a COA, the Part 107 and TRUST credentials stay with the pilot and are transferable if the pilot changes organizations (e.g., a graduate student cannot operate a UAS for research under a university's COA after they graduate, but they would still retain their ability to operate with their FAA-granted credentials).

In general, there are some legal limits to the kinds of UAS flight operations allowed under any regulatory framework. UAS pilots are responsible for ensuring that their equipment and flight plan are in compliance with whichever regulatory framework they are operating under. As with pilot credentials, researchers operating under a COA would need to comply with the flight operational rules specific to their organization. Two of the most relevant flight restrictions for ecologists operating under both Part 107 and Recreational Flyer rules are as follows: (1) the UAS must be within visual line of sight of the pilot in command (or within a visual line of sight of another crew member acting as a "visual observer" as long as that observer has direct communication with the pilot in command) and (2) the UAS must fly no higher than 400 ft (122 m) AGL. Part 107 rules do constrain operations in other specific ways, some of which may also apply to flights under Recreational Flyer rules, which prohibit unsafe operations. For instance, UAS cannot fly faster than 87 knots (161 km/h); UAS must be at least 500 ft (152 m) below clouds and 2000 ft (609 m) horizontally from clouds. However, high-quality optical data collection usually requires UAS operations to be well within these limits. Additional authorizations are needed to fly in "controlled" airspace (i.e., class B/C/D/E airspace, typically near airports), to fly a UAS above 55 lbs (24.9 kg), and to fly a UAS beyond the line of sight. Some of these authorizations are relatively easy to obtain (e.g., many requests to fly in controlled airspace below 400 ft [122 m] AGL can be automatically granted in near real time using the Low Altitude Authorization and Notification Capability), while others are nearly impossible (at the time of this writing) and are likely beyond the reach of an ecological data collection campaign (e.g., beyond visual line-of-sight flights). Finally, the drone itself may need to be marked and registered with the FAA. The FAA website is usually the best source of the most up-to-date information about the rules that might govern UAS research flights (https://www.faa.gov/uas/; accessed 11 March 2022).
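
A simple preflight script can flag plans that exceed the numeric limits quoted in this section. The sketch below is illustrative only and checks just those quoted thresholds; it is not a compliance tool, and the current FAA rules should always be consulted directly:

```python
def preflight_limit_check(max_altitude_m_agl, max_speed_kmh, takeoff_mass_kg, visual_line_of_sight):
    """Flag flight-plan values that exceed the numeric limits quoted in this section.

    A passing check is not legal advice; always confirm current FAA rules.
    """
    issues = []
    if max_altitude_m_agl > 122:
        issues.append("planned altitude exceeds 400 ft (122 m) AGL")
    if max_speed_kmh > 161:
        issues.append("planned speed exceeds 87 knots (161 km/h)")
    if takeoff_mass_kg > 24.9:
        issues.append("vehicle + payload exceeds 55 lb (24.9 kg); additional authorization needed")
    if not visual_line_of_sight:
        issues.append("operation is beyond visual line of sight; additional waiver needed")
    return issues

print(preflight_limit_check(100, 36, 1.4, True))   # [] -> within the quoted limits
print(preflight_limit_check(150, 36, 1.4, True))   # flags the altitude
```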

It is important to connect with the appropriate land manager before flying on public land to obtain appropriate site access if necessary, to check for temporary closures (e.g., bird nesting), and to be a good neighbor. Because NEON does not own the land on which they operate, flying NEON sites will require contacting and obtaining permission from the site host; contact information is available on the NEON webpage for each site, and NEON staff may also help facilitate those connections. Additional non-NEON research is allowed at some but not all sites. If permission is obtained, it is important not to disturb any existing research being conducted at those sites, to maintain a 20-m buffer around any NEON-distributed plot, and to completely avoid the area of the tower airshed (which is also delineated on the NEON webpage for each site; e.g., https://www.neonscience.org/data-samples/data/spatial-data-maps). Clear communication with concerned parties of UAS flights for research, even if there is every legal right to fly at a particular location, is important for building community credibility and longevity for UAS as a tool for ecologists. Finally, as with flight planning, it is best practice to develop a routine and a checklist (see Appendix S1) for determining whether UAS flights are allowed in the intended survey area under the relevant regulatory framework.

Radiometric calibration

Optical data from UAS-mounted sensors must be radiometrically calibrated in order to convert otherwise arbitrary image pixel values into meaningful, standardized units such as reflectance. Applying image preprocessing steps (e.g., correcting for camera artifacts such as vignetting and dark noise) and subsequent radiometric calibration allows UAS data to be comparable with high-quality scientific data products derived from the NEON AOP. The empirical line method (ELM) has proved to be a simple and accurate UAS radiometric calibration option (Wang & Myint, 2015). ELM requires placing at least two materials with known reflectance, such as calibrated reflectance panels, in the scene, which are imaged while the sensor is in flight. These images containing the calibrated reflectance panels are then used to translate image pixel values to reflectance for each spectral band for the whole survey area. For some sensors, particularly low-cost multispectral sensors designed for agriculture, a downwelling light sensor (DLS, also known as a sunshine sensor) records data about the illumination levels at the exact moment that each image is captured. This information is often incorporated into the SfM processing to partially correct for varying light conditions throughout the flight. Importantly, the DLS can help account for varying illumination from image to image, but it does not allow for conversion of the image pixel values into a standardized unit of reflectance the way that calibrated reflectance panels can.
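
A minimal version of the per-band ELM calculation can be written in a few lines. The sketch below assumes that the mean digital numbers over each panel have already been extracted from the imagery and that the panel reflectances are known for the band in question; the numbers shown are illustrative, not measured values:

```python
import numpy as np

def empirical_line_fit(panel_dn, panel_reflectance):
    """Fit the empirical line method for one spectral band.

    panel_dn: mean image digital numbers over each calibration panel (>= 2 panels).
    panel_reflectance: known reflectance of each panel in this band (0-1).
    Returns (gain, offset) such that reflectance = gain * DN + offset.
    """
    gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)
    return gain, offset

def apply_empirical_line(band_dn, gain, offset):
    """Convert a band of raw digital numbers to reflectance and clip to [0, 1]."""
    return np.clip(gain * band_dn + offset, 0.0, 1.0)

# Illustrative values only: dark and gray panels with 3% and 48% reflectance.
gain, offset = empirical_line_fit(panel_dn=[4200, 31000], panel_reflectance=[0.03, 0.48])
reflectance = apply_empirical_line(np.array([4200, 18000, 31000]), gain, offset)
print(reflectance)  # ~[0.03, 0.26, 0.48]
```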

NEON implements a complex algorithm to convert its imaging spectrometer data to units of reflectance (Karpowicz & Kampe, 2015) that is founded on a principle similar to ELM. A series of vicarious calibration flights are conducted with the NEON AOP before and after every field season (Leisso et al., 2014). They fly over two large tarps with 48% (medium gray) and 3% (black) reflectance, collect ground-based reflectance measurements of these tarps with an analytical spectral device, and use these data to verify the radiometric calibration of the NEON AOP imaging spectrometer (https://www.neonscience.org/data-collection/imaging-spectrometer). The reflectance of these tarps is meant to represent the upper and lower bounds of reflectance typically seen in nature. NEON's algorithm also compensates for the scattering and absorption of light as it travels through the atmosphere (e.g., haze, water vapor) on its optical path to the AOP.

Using three panels with varying gray levels will allow for the most flexibility in calibration methodology for UAS image data. Ideally, panels should be large enough to be imaged during flight and contain an area of 10 × 10 pixels (Wang & Myint, 2015). Panels should be matte (as opposed to shiny or glossy) with a smooth, horizontal surface (Smith & Milton, 1999). Panel colors should be shades of black (near 0% reflectance) and gray, ideally covering the range of reflectance for the subject of interest. White (near 100% reflectance) panels are not recommended because they can saturate and cause other issues (Cao et al., 2019). For plant surveys, we recommend a medium gray, dark gray, and black target because vegetation tends to be about 50% average reflectance or medium gray. Calibrated reflectance panels often come with the sensor to be integrated with the vehicle, but they can also be purchased separately or made at home. Care must be taken with homemade panels because, even though they may appear a particular shade to the human eye (visible spectrum), they may not be a similar reflectance across all wavelengths observed by a multispectral or hyperspectral sensor. Many studies have identified promising materials for homemade panels: plywood covered with matte paint (Rosas et al., 2020), gray linoleum, and black fine-weave cotton fabric (Cao et al., 2019).

Researchers have vastly different constraints for their budget, environmental conditions in the field, and equipment availability, so “good enough” may be more realistically attainable than the “ideal” radiometric calibration practices described above. If in-flight panel photographs are not possible or if only a small panel is available (as is often the case with panels that come with a sensor), photographs of the panel can be captured either before or after flight. Many off-the-shelf multispectral sensors only come with one small calibration panel, but having one panel is better than none even though this may limit the data calibration possibilities in the future. Further, popular commercial SfM software packages such as Agisoft Metashape and Pix4D may only accommodate one panel, so correcting UAS imagery with a single panel may be the only practical option. When only a single calibration panel is used, choosing a gray panel (rather than a white or black one) helps to avoid crushing or clipping in under/overexposed images.

Regardless of panel cost, color, or material, it is critical to clean, remeasure, recalibrate, and/or replace them over time to ensure the most accurate reflectance calibration possible. This is especially important when fieldwork involves exposing panels to harsh environmental conditions with dirt, dust, sand, sun, and any other types of physical damage or degradation. Illustrating this point, Scholl and Ku (2021) remeasured a calibrated reflectance panel after 3 years of fieldwork using a handheld ASD (ASD Inc., a Malvern Panalytical Company, Longmont, CO, USA) FieldSpec 4 spectrometer. Figure 2 depicts the manufacturer-provided panel reflectance spectrum from the time of purchase in 2017 (MicaSense) compared with the reflectance spectrum measured 3 years later with the handheld ASD. The reflectance of the panel has decreased by as much as 10% due to the presence of dirt and dust, especially in the shorter wavelengths. The manufacturer advises against cleaning this make and model of the calibration panel as it would force debris further into the pores of the panel material, though newer panels from this manufacturer can be cleaned (see https://support.micasense.com/hc/en-us/articles/360005163934-Calibrated-Reflectance-Panel-Care-Instructions). In general, it is key to ensure that the panel reflectance data being used for radiometric calibration accurately represent the panel's actual reflectance, either using the manufacturer-provided reflectance data for new/clean panels or using updated reflectance measurements on a panel that cannot be restored to its initial conditions.
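
When a panel is remeasured with a full-spectrum instrument, the new spectrum can be summarized per sensor band before being used for calibration. The following sketch shows one way to do this under stated assumptions: the band ranges are approximate values assumed for a generic five-band multispectral camera (check the documentation for the specific sensor), and the spectrum here is synthetic rather than measured:

```python
import numpy as np

def band_average_reflectance(wavelengths_nm, reflectance, band_ranges_nm):
    """Average a measured panel reflectance spectrum within each sensor band."""
    averages = {}
    for name, (lo, hi) in band_ranges_nm.items():
        mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
        averages[name] = float(reflectance[mask].mean())
    return averages

# Approximate band ranges (nm) assumed for a five-band multispectral camera.
BANDS = {"blue": (465, 485), "green": (550, 570), "red": (663, 673),
         "red_edge": (712, 722), "nir": (820, 860)}

# Synthetic stand-in for a handheld spectrometer measurement of a well-used panel.
wl = np.arange(350, 1001)
refl = 0.40 + 0.00005 * (wl - 350)   # illustrative, slightly sloped spectrum
print(band_average_reflectance(wl, refl, BANDS))
```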

FIGURE 2.
(a) Reflectance of a calibrated reflectance panel as a function of wavelength. The black solid line corresponds to the manufacturer-provided reflectance spectrum representing the panel's reflectance at the time of purchase. The blue dashed line corresponds to remeasurements of the panel's reflectance spectrum in 2020 with a handheld spectrometer, after 3 years of field use. The MicaSense RedEdge 3 spectral band ranges (blue, green, red, red edge, and near infrared) are depicted as vertical bars of color. The panel reflectance decreased between 2017 and 2020, with this decrease being more pronounced toward the shorter wavelengths. (b) A photograph of the calibrated reflectance panel measured in (a), taken in 2020 after 3 years of field use. The change in reflectance between 2017 and 2020 is likely the result of accumulated dust and sand from the field, as seen partially wiped away on the lower right corner of the panel's plastic case. The difference between the manufacturer-reported panel reflectance and the actual reflectance after heavy use demonstrates the necessity to clean, remeasure, or replace calibration panels when performing radiometric calibration.

Georeferencing

It is important to consider how the geographic positions of objects within the UAS survey are used to answer the research question. Those positions can range from being globally accurate with precise correspondence to a location on the Earth (e.g., the tree is located at these coordinates, ±5 cm) to being relatively accurate with the spatial relationships and real-world distances between objects in the scene preserved but perhaps all frame-shifted by some amount compared with reality (e.g., the first tree is 5 m away from the second tree, but all the trees are shifted 10 m compared with their true on-the-ground coordinates). In fact, it is possible for the SfM photogrammetry process to reconstruct three-dimensional models and orthomosaics of the area of interest solely using visual cues in individual images without any geolocation data at all, resulting in a relative accuracy between objects in the scene but no ability to make real-world measurements (e.g., the distance between the two trees is 5% of the width of the surveyed area). In order to infer units from these relative distances (e.g., to get the distance in meters), some measure of scale in the imagery is required. Geolocating the SfM photogrammetry products in real-world space requires external information about the geolocation of each input image, such as from the Global Navigation Satellite System (GNSS). Note that GNSS is the generic term for the network of satellites that offer global coverage of geospatial position, of which the US-owned GPS (Global Positioning System) is a part. Most popular off-the-shelf vehicles and/or optical payloads have a basic GNSS antenna and receiver with an accuracy of <10 m, and the optical data collected will be automatically geotagged in the image metadata. The automatic integration of these metadata in the most popular SfM photogrammetry software means that the second scenario described above—relative spatial accuracy, but with SfM products frame-shifted by some amount similar in magnitude to the GNSS receiver accuracy—is achievable with no extra steps by the user. If greater accuracy is required than what is provided by the built-in GNSS receiver, however, then additional steps are required.

Ground control points (GCPs), real-time kinematic (RTK) corrections, and post-processed kinematic (PPK) corrections are three solutions to accurately georeference images collected by the UAS. GCPs are markers laid out on the ground with known geolocations that are visible in the UAS data and are used to tie the UAS imagery to real-world coordinates during the SfM processing step. The GCP approach can only be as precise as the tool used to measure the geolocations of the GCPs in the field. To improve upon the geolocation accuracy already in place using image metadata geotags from the basic GNSS receiver that is likely onboard the UAS, a high-precision GNSS receiver must be used to mark the geolocations of the GCPs. A high-precision GNSS may be prohibitively expensive, but could potentially be borrowed or rented from geodetic services (e.g., nonprofit UNAVCO allows the equipment to be borrowed for NSF-funded projects for free). Ideally, GCPs will be placed near edges or randomly throughout the mission area, but the density of GCPs is typically more important, with Santana et al. (2021) finding that 10 GCPs in their 2-ha area of interest were needed for sub-7 cm precision (but using as few as four GCPs still produced 16 cm precision at all flight heights and GCP spatial distributions). Zimmerman et al. (2020) found that it was optimal to place GCPs in the corners of the study site, as well as at low and high elevations within the study site. GCPs must be visible from the sensor, so it is best to place them in bright and open areas. Finding suitable locations in heavily forested areas with closed canopies can be challenging; therefore, it may be beneficial to expand survey areas to include suitable areas for GCPs if none can be found within the area of scientific interest. Examples of effective GCPs are fabric swaths placed in an X, bright-colored bucket lids, or checkered mats (Figure 3). GCPs with more conspicuous, precise points make for more precise geolocating because that specific point can be more easily matched between the field- and UAS-measured data. For instance, trying to identify the exact center of a bright-colored bucket lid from aerial imagery might allow for 10 cm of mismatch with the exact point measured on the ground, the intersection of two 5-cm-wide pieces of cloth might allow for 5 cm of mismatch, and the crisp intersection of the white and black triangles might only allow for 1 cm of mismatch (Figure 3). Because the field measurements of GCP locations can be a slow step, it might be advantageous to install permanent monuments at desirable GCP locations, measure their precise locations once, and then reuse those same points during future data collection (e.g., if not the conspicuous marker itself, perhaps a more discreet piece of rebar that can have the actual GCP draped over the top of it just prior to new data collection). Preexisting permanent (or semi-permanent) points may also be used if they can be readily measured on the ground and are visible from the air. For example, NEON TOS plots have permanent markers that have been georeferenced with high precision (approximately 0.3 m) that can be used as GCPs if they are visible to the UAS (Figure 1).

FIGURE 3.
Aerial red–green–blue (RGB) photograph captured using a Da-Jiang Innovations (DJI) Phantom 4 Pro on 23 January 2020 at 120 m of altitude above ground level. The photograph depicts three ground control points (GCPs) of each of two different types in the center of the image: a 1-m-long spray-painted orange cotton drop cloth in an "X" pattern and a 1 × 1 m square of cotton drop cloth spray-painted with black triangles. The GCPs are progressively more conspicuous under the canopy, in the shrub field, and on the dirt road. The size of the area covered by the main photograph is approximately 180 m wide × 120 m high.

RTK and PPK corrections augment the accuracy of a UAS's built-in GNSS receiver by correcting the noise inherent in the instrument using additional equipment and processing steps without the need for laying out GCPs and determining their locations. This can result in massive time savings, particularly when surveying large areas. For instance, Gillan et al. (2021) was able to survey and process data covering over 190 ha of rangeland in approximately 30 days versus an estimated 141 days using a conventional UAS workflow, with an estimated 47 days saved just from using an RTK system versus GCPs. Even with RTK and PPK corrections, it is still considered good practice to lay out some GCPs at precisely known locations, then quantify geolocation error in the final SfM products by measuring the difference between the field- and UAS-measured GCP locations.
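
The GCP-based error check mentioned above can be computed directly once field-surveyed and SfM-derived coordinates are in the same projected coordinate system. The following sketch (with made-up coordinates for illustration) reports horizontal root mean square error:

```python
import numpy as np

def horizontal_rmse_m(field_xy, sfm_xy):
    """Root mean square horizontal error (m) between field-surveyed ground control
    (or check) point coordinates and their locations in the SfM products.

    Both inputs are (n, 2) arrays of easting/northing in the same projected CRS (m).
    """
    field_xy, sfm_xy = np.asarray(field_xy, float), np.asarray(sfm_xy, float)
    residuals = np.linalg.norm(field_xy - sfm_xy, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Illustrative check-point coordinates (m) in a projected CRS.
field = [[453210.12, 4431820.55], [453295.40, 4431788.10], [453180.75, 4431750.02]]
sfm   = [[453210.18, 4431820.49], [453295.31, 4431788.21], [453180.80, 4431749.95]]
print(f"horizontal RMSE: {horizontal_rmse_m(field, sfm):.3f} m")
```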

Data management

Image data collected from a UAS can quickly become “big data,” and being intentional about data management will ease friction points at every step in the science workflow, from data collection to manuscript writing (Figure 4). Having a ballpark idea of the total anticipated data storage requirements will help guide data storage hardware purchases such as Secure Digital memory cards (SD cards), external hard drives, internal hard drives, network-attached storage (NAS), third-party cloud storage allotments, or university/organization-provided cloud storage allotments. Given the desired flight plan, the number of survey areas, and the payload (as determined by what meets the science requirements), it should be possible to estimate the amount of data that will be collected per flight, per survey area, and in total for the whole project. It is best practice to adhere as closely as possible to the “3-2-1 backup rule,” where three copies of the data exist with a local, accessible copy on two different devices (e.g., a local computer and an external hard drive) and one copy off-site (e.g., a cloud backup service) (Ruggiero & Heckathorn, 2012).
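For example, a rough storage estimate can be scripted in a few lines of R; all of the input values below are placeholders to be replaced with the specifics of your sensor, flight plan, and project scope:

    # Placeholder values: adjust to your sensor, flight plan, and project scope
    mb_per_image     <- 8      # approximate file size of one image (MB)
    images_per_sec   <- 1      # image capture rate (images per second)
    flight_min       <- 20     # duration of one flight (minutes)
    flights_per_area <- 2      # flights needed to cover one survey area
    n_areas          <- 10     # number of survey areas in the project

    gb_per_flight <- mb_per_image * images_per_sec * flight_min * 60 / 1000
    gb_total      <- gb_per_flight * flights_per_area * n_areas
    # Multiply by three to satisfy the 3-2-1 backup rule (two local copies + one off-site)
    c(per_flight_GB = gb_per_flight, project_GB = gb_total, with_backups_GB = 3 * gb_total)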

Details are in the caption following the image
Planning a data management pipeline is a large up-front investment but can save time and money in the long run, making it well worth prioritizing. Considering the storage, backup, and sharing needs of the datasets you anticipate collecting and processing ensure data persistency and availability. This data pipeline describes options and includes recommendations. There are trade-offs at each decision point, so it is important to understand your data needs and budget. For example, building your own data management system may be more affordable and tailored to your needs, but rented external storage systems may back up your data automatically and maintain the system and hardware requirements for you.

UAS optical data are typically collected on SD cards inserted in the sensor, so it is important to have enough SD cards prior to flights to accommodate the data being collected in the field. Formatting the SD cards (i.e., erasing all data on them) prior to a new flight is a good practice that ensures the full capacity of each SD card is available for new data collection (provided that the old data have first been safely transferred to another storage medium, as described below). Swapping out the SD card after each flight for an empty one is advisable so that the only copy of freshly collected imagery is not lost in the event of a UAS mishap on the next flight. Frequently transferring data from the SD cards to both a laptop hard drive and an external hard drive in the field satisfies the part of the backup rule concerning data stored on "two different devices." Storing those two devices in different locations while in the field (e.g., in two different vehicles, or in the trunk and under the car seat) might prevent some types of data loss (e.g., theft of one of the devices). Once the data are transferred to other devices, it is safe to delete the images on the SD cards in order to reuse them. It is recommended to perform quality assurance (QA) checks on the images while it is still possible to recollect data, which could mean viewing the images on a laptop on-site or while still near the field study site. Check that images are not obviously over- or underexposed, that the expected number of images was collected, that file sizes appear consistent and reasonable, and that the necessary metadata (e.g., geolocation) were captured with each image. Generally, a full QA assessment cannot be performed in the field because of time and computation limitations, but the field QA should be sufficient to ensure the images can be processed into the desired products. Some NEON sites (e.g., NIWO) have a field house that may be accessed, with permission, for laptop-friendly workspaces and/or charging options.
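A lightweight field QA pass can also be scripted. The base-R sketch below (with a hypothetical mount path and image count) checks that the expected number of images was collected and flags unusually small files, which often indicate truncated or corrupted captures; checking geotags would additionally require reading the image metadata with an EXIF tool:

    # Hypothetical path to the SD card as mounted on a field laptop
    sd_path  <- "/Volumes/SD_CARD/DCIM"
    expected <- 950   # number of images the flight plan should have produced

    imgs  <- list.files(sd_path, pattern = "\\.(JPG|jpg|TIF|tif)$",
                        recursive = TRUE, full.names = TRUE)
    sizes <- file.size(imgs) / 1e6    # file sizes in MB

    length(imgs) == expected          # was the expected number of images collected?
    summary(sizes)                    # do file sizes look consistent and reasonable?
    imgs[sizes < 0.5 * median(sizes)] # flag unusually small (possibly corrupt) files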

Once data collection is completed, data management can be broken into a quick access phase, when data need to be readily available (Figure 4, short-term storage), and a slower access phase, which concerns the longer term storage of both data and metadata (Figure 4, long-term storage). During the quick access phase, the data should be as "close" to the workstation doing the SfM processing as possible, ideally on a fast internal hard drive (e.g., a solid-state drive) in the same computer as the SfM software. Having a good long-term storage solution for the imagery (and derived data products) is important for the slower access phase, and keeping a copy of those data off-site will satisfy the 3-2-1 backup rule. Some universities/organizations might already have storage infrastructure capable of accommodating vast data volumes and off-site backups (e.g., research computing storage). If university/organization storage infrastructure is not available, data storage-specific computing hardware (e.g., NAS) can be paired with third-party cloud storage (e.g., CyVerse) to meet long-term data management needs. In this case, using slower but lower cost spinning hard disks instead of solid-state drives is a good option for the local data backup, because data volume (i.e., the ability to back up data for many projects) can be prioritized over data access speed in the slower access phase. For such high volumes of data, establishing "data levels" that characterize how derived each new processed product is makes the data easier to navigate and work with (Wyngaard et al., 2019). Typically, Level 0 represents raw data (the original images from the sensor in the case of optical data) and higher levels are derived from lower levels (e.g., figure 4 in Koontz et al., 2021 shows data levels for optical data collected for a forest ecology project).

For public-facing storage, we suggest publishing all data product levels to a long-term data repository with a digital object identifier, in the open science spirit of broadening access to research (Figure 4, public-facing). Ideally, this includes the original raw images from UAS missions, which may be reprocessed in the future into even higher quality products given the rapid advances in SfM photogrammetry software. This can prove costly with particularly high data volumes, but it may be possible to rely on university/organization cyberinfrastructure resources or other options that cater specifically to researchers aiming to practice open science principles (e.g., CyVerse and the Open Science Framework).

Data processing

One common approach for processing UAS-derived imagery such that it can be integrated with other data sources (e.g., NEON) is SfM photogrammetry, which converts the original images into data products such as a two-dimensional orthomosaic and a three-dimensional point cloud. Many software applications are available for SfM photogrammetry that produce results of similar quality (Forsmoo et al., 2019), and many have steep discounts for research or educational use (e.g., Agisoft Metashape and Pix4DMapper). Some free, open-source options are also available (e.g., OpenDroneMap) and are steadily improving. SfM photogrammetry can be CPU- (central processing unit), RAM-, disk drive-, and GPU- (graphics processing unit) intensive, so a workstation that balances these hardware components is ideal. Higher end gaming desktops are often sufficiently powerful workstations for processing images locally, but cloud-processing options also exist (e.g., university high-performance computing resources, add-on capabilities of the specific SfM software purchased, and CyVerse—see Swetnam et al., 2018). Even if most of the processing takes place in the cloud, it can still be beneficial to have a relatively powerful local machine in order to readily view and manipulate the resulting data products.

SfM workflows require myriad decisions about processing parameters, all of which might affect the quality of the resulting data products. An excellent SfM guide has been published by the US Geological Survey (USGS) for the Agisoft Metashape software (Over et al., 2021), and some researchers have experimented with various SfM processing parameter combinations to empirically determine optimal parameter sets for particular use cases (Tinkham & Swayze, 2021; Young et al., 2022), though some trial and error may still be required for new study systems. Some software packages allow the SfM processing to be automated with coding scripts, which then serve as a transparent and reproducible record of the workflow. Other software workflows are based on a point-and-click graphical user interface (GUI), which requires the user to take note of the processing steps. Working in a coordinate reference system that measures local distances in true units of distance (e.g., meters in the Universal Transverse Mercator coordinate reference system, rather than degrees in a longitude/latitude coordinate reference system) will eliminate some friction points with the resulting SfM products (particularly the three-dimensional point cloud). In any case, it is important to use a consistent coordinate reference system for all of your data products (e.g., GNSS positions of GCPs, GNSS locations of the UAS camera). When working with optical data, it may be necessary to "spectrally resample" the high spectral resolution NEON AOP data to match the sensor payload of the UAS, whose spectral resolution is likely coarser and not aligned with that of the NEON instrument (Figure 5). Finally, calculating derived spectral indices such as the normalized difference vegetation index (NDVI; Rouse et al., 1973) from the original reflectance channels can help with data harmonization across multiple sensors by reducing some of their individual reflectance inaccuracies (Cao et al., 2019).
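For example, given a multispectral orthomosaic with red and near-infrared bands, NDVI can be computed in a few lines of R with the terra package; the file name and band order below are assumptions for illustration only:

    library(terra)

    # Assumed multispectral orthomosaic with bands ordered blue, green, red, red edge, NIR
    ortho <- rast("orthomosaic_reflectance.tif")
    red   <- ortho[[3]]
    nir   <- ortho[[5]]

    ndvi <- (nir - red) / (nir + red)   # NDVI (Rouse et al., 1973)
    plot(ndvi)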

Figure 5. (a) Relative spectral response of the MicaSense RedEdge 3 camera in five distinct spectral bands, based on the quantum efficiency of the image sensor and the bandpass filter transmission at each wavelength. The dashed vertical lines in (a) demarcate the spectral extent of panel (b). (b) Relative spectral responses for two channels of the MicaSense RedEdge 3 camera plotted with the relative spectral responses for 20 channels of the National Ecological Observatory Network Airborne Observation Platform (NEON AOP) imaging spectrometer. Several channels of the NEON AOP instrument fall within each of the MicaSense RedEdge 3 channels, so the reflectance data from the NEON AOP are resampled (weighted, in effect) such that they can be used as though the NEON instrument exhibited the same spectral sensitivity as the MicaSense RedEdge 3 instrument.

After the SfM workflow is completed, there are many options for further processing the resulting data products (e.g., orthomosaics and point clouds) such that they can be integrated with NEON. Many free, open-source software tools exist for working with geospatial data products produced by UAS and NEON including QGIS (https://qgis.org/en/site/) for visualization and GUI-based manipulation of raster and vector data types, CloudCompare (https://www.danielgm.net/cc/) for visualization and GUI-based manipulation of point clouds, and a suite of packages (https://cran.r-project.org/web/views/Spatial.html) for the R programming language (R Core Team, 2021). Several packages have also been developed specifically for working with NEON data, including neonUtilities (Lunch et al., 2021), neonhs (Joseph & Wasser, 2021), geoNEON (National Ecological Observatory Network, 2020), and NeonTreeEvaluation (Weinstein et al., 2021). A recent review by Atkins et al. (2022) describes the ecosystem of R packages available for working with forestry data, many of which are relevant for the types of geospatial data produced by UAS and NEON. More generally, working with these kinds of high-resolution geospatial data, which are often classically “big,” can benefit from following the few simple rules recently outlined by Mahood et al. (2022).
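As a brief illustration of the NEON-specific tooling, the sketch below uses the neonUtilities package to download the NEON woody plant vegetation structure data used later in the case study; argument values other than the data product identifier and site code are conveniences, not requirements:

    library(neonUtilities)

    # Download the NEON TOS woody plant vegetation structure product (DP1.10098.001)
    # for the Niwot Ridge (NIWO) site; check.size = FALSE skips the interactive size prompt
    veg <- loadByProduct(dpID = "DP1.10098.001", site = "NIWO", check.size = FALSE)

    # The result is a list of data tables; inspect what was returned
    names(veg)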

CASE STUDY

Science requirements

Forest inventories describe the geolocation and physical attributes of individual trees, and provide critical information for management decision-making and advancing ecological theory (Young et al., 2022). Remote sensing approaches to creating forest inventories can cover more area than field-based methods at a lower cost per area, and recent approaches still allow for the characterization of individual trees (Weinstein et al., 2019). The NEON TOS collects field-based forest inventory data (the “Woody Plant Vegetation Structure” data product; DP1.10098.001) and remote sensing data in their AOP that have been used to generate forest inventory data (Weinstein et al., 2020). The field-based data are restricted to 20 × 20 m field plots, while the AOP data cover dozens of square kilometers but at moderately coarse resolution (10 cm for RGB imagery, 1 m for imaging spectrometer data, and 1 m for lidar data). UAS have the capacity to fill in missing scales of observation for creating forest inventories by capturing a broader spatial extent than field-based NEON data but at a finer spatial resolution than NEON AOP data. With this as a motivation, here we present a case study where we collect and process UAS data coincident with a NEON TOS plot to create a forest inventory. We then benchmark that forest inventory against the NEON TOS field data and describe how to extract individual tree-scale spectral information that is comparable to that collected by the NEON AOP. We use the previous section's “core principles” as a framework for describing our workflow, and provide all data and code to further aid our mental model building.

Vehicle

Our vehicle was a DJI Matrice 100 rotocopter with four propellers and a proven track record of safe, predictable flights. The vertical takeoffs and landings of the rotocopter-style drone allowed us to operate the vehicle from a clearing as small as the width of the dirt access road to the site. We used a piece of plywood laid on the ground as a flat, stable takeoff platform that would also help to minimize the amount of dust kicked up by the rotor wash during takeoff and landing. The Matrice 100 has a relatively high lift capacity that allows for a payload to be integrated and is heavier than many consumer rotocopters, which makes it both more stable in windy conditions and more challenging to transport beyond a road. We charged all vehicle batteries the night prior to the flight.

Payload

We captured imagery using two co-mounted sensors: a gimbal-stabilized DJI Zenmuse X3 RGB camera and a MicaSense RedEdge 3 sensor, which is sensitive to electromagnetic radiation in five distinct spectral channels across the visible and near-infrared wavelengths. The DJI Zenmuse X3 camera has a focal length of 3.6 mm, a sensor width of 6.17 mm, and a sensor height of 4.55 mm. The MicaSense RedEdge 3 sensor has a focal length of 5.5 mm, a sensor width of 4.8 mm, and a sensor height of 3.6 mm. We used a fixed mount and a prebuilt integration kit for the MicaSense RedEdge 3 made by the sensor manufacturer to integrate with our vehicle. This particular mount is angled such that the sensor faces approximately downward when the aircraft is tilted forward in flight, and the integration kit allows the sensor to share power with the vehicle batteries. The RedEdge 3 sensor's image capture mechanism operates independently from the flight planning app or the vehicle's flight computer, though deeper integration with specific vehicles is possible with newer versions of the sensor. Prior to flight, we connected to the RedEdge 3 sensor with a laptop via its built-in Wi-Fi to verify that the sensor's onboard GNSS receiver was functioning properly and to initiate image capture. We set the RedEdge 3 sensor to capture images at a rate of 1 image/s. We set the DJI Zenmuse X3 camera to capture images at a rate of 0.5 images/s. Using the quantum efficiency and filter bandpass sensitivity of an average RedEdge 3 sensor provided by MicaSense, we estimated the relative spectral response of the instrument, which characterizes how the sensor captures light across the electromagnetic spectrum (Figure 5). We provide the relative spectral response data in a format that makes it interoperable with the hsdar package (Lehnert et al., 2019).

Environment

Our data collection took place on a single day under mostly sunny, light wind conditions on 9 October 2019 starting at 2:00 PM Mountain Daylight Time. We ideally would have flown closer to solar noon to minimize shadows in the imagery, particularly this late in the year.

Flight planning

We used the Map Pilot for DJI application on a 2017 Wi-Fi+Cellular-equipped, 10.5-inch (26.7 cm) Apple iPad Pro for planning the flight. Map Pilot is a reliable, full-featured flight planning application that allows us to set flight parameters such as forward overlap, side overlap, and sensor angle. We flew at an altitude of 100 m, and set the forward overlap to 95% and side overlap to 80% (based on the built-in DJI RGB camera, the Zenmuse X3). We used a zero-degree sensor angle (i.e., downward/nadir facing) and added a perpendicular set of aerial transects to create a crosshatch flight pattern (Figure 1). We opted to plan flights with particularly high overlap so that we had the option to remove photographs at different intervals prior to SfM processing in order to test how various photograph densities affected our UAS-derived forest inventory benchmark against NEON TOS field data. The Map Pilot software determines flight parameters such as flight speed and distance between aerial transects based on the user-desired front/side overlap, as well as the field of view and image capture rate of the built-in DJI camera. We calculated these flight parameters as follows:
$$
\begin{aligned}
x_{\mathrm{ground},x3} &= a_{\mathrm{vehicle}}\,\frac{x_{\mathrm{sensor},x3}}{f_{\mathrm{sensor},x3}},\\
y_{\mathrm{ground},x3} &= a_{\mathrm{vehicle}}\,\frac{y_{\mathrm{sensor},x3}}{f_{\mathrm{sensor},x3}},\\
t_{\mathrm{vehicle}} &= \left(1-o_{\mathrm{side},x3}\right)\,x_{\mathrm{ground},x3},\\
s_{\mathrm{vehicle}} &= \left(1-o_{\mathrm{front},x3}\right)\,y_{\mathrm{ground},x3}\,s_{\mathrm{imaging},x3},
\end{aligned}
$$
where $x_{\mathrm{ground},x3}$ is the horizontal dimension of the Zenmuse X3 sensor's ground footprint in meters, $y_{\mathrm{ground},x3}$ is the vertical dimension of the Zenmuse X3 sensor's ground footprint in meters, $a_{\mathrm{vehicle}}$ is the vehicle's altitude during image capture in meters, $x_{\mathrm{sensor},x3}$ is the width of the Zenmuse X3 sensor in millimeters, $y_{\mathrm{sensor},x3}$ is the height of the Zenmuse X3 sensor in millimeters, $f_{\mathrm{sensor},x3}$ is the focal length of the Zenmuse X3 sensor in millimeters, $t_{\mathrm{vehicle}}$ is the spacing between adjacent aerial transects of the vehicle in meters, $o_{\mathrm{side},x3}$ is the planned side overlap of the Zenmuse X3 imagery as a fraction, $o_{\mathrm{front},x3}$ is the planned front overlap of the Zenmuse X3 imagery as a fraction, $s_{\mathrm{vehicle}}$ is the speed of the vehicle in meters per second, and $s_{\mathrm{imaging},x3}$ is the planned imaging speed (i.e., photo capture rate) of the Zenmuse X3 sensor in images per second.
Because the MicaSense RedEdge 3 has a different optical geometry than the Zenmuse X3 camera, we can use the flight parameters calculated above to determine the actual overlap of the imagery from the MicaSense RedEdge 3:
$$
\begin{aligned}
x_{\mathrm{ground},\mathrm{RE}3} &= a_{\mathrm{vehicle}}\,\frac{x_{\mathrm{sensor},\mathrm{RE}3}}{f_{\mathrm{sensor},\mathrm{RE}3}},\\
y_{\mathrm{ground},\mathrm{RE}3} &= a_{\mathrm{vehicle}}\,\frac{y_{\mathrm{sensor},\mathrm{RE}3}}{f_{\mathrm{sensor},\mathrm{RE}3}},\\
o_{\mathrm{side},\mathrm{RE}3} &= 1-\frac{t_{\mathrm{vehicle}}}{x_{\mathrm{ground},\mathrm{RE}3}},\\
o_{\mathrm{front},\mathrm{RE}3} &= 1-\frac{s_{\mathrm{vehicle}}}{s_{\mathrm{imaging},\mathrm{RE}3}\,y_{\mathrm{ground},\mathrm{RE}3}},
\end{aligned}
$$
where $x_{\mathrm{ground},\mathrm{RE}3}$ is the horizontal dimension of the MicaSense RedEdge 3 sensor's ground footprint in meters, $y_{\mathrm{ground},\mathrm{RE}3}$ is the vertical dimension of the RedEdge 3 sensor's ground footprint in meters, $x_{\mathrm{sensor},\mathrm{RE}3}$ is the width of the RedEdge 3 sensor in millimeters, $y_{\mathrm{sensor},\mathrm{RE}3}$ is the height of the RedEdge 3 sensor in millimeters, $f_{\mathrm{sensor},\mathrm{RE}3}$ is the focal length of the RedEdge 3 sensor in millimeters, $o_{\mathrm{side},\mathrm{RE}3}$ is the calculated side overlap of the RedEdge 3 imagery as a fraction, $o_{\mathrm{front},\mathrm{RE}3}$ is the calculated front overlap of the RedEdge 3 imagery as a fraction, and $s_{\mathrm{imaging},\mathrm{RE}3}$ is the planned imaging speed (i.e., photo capture rate) of the RedEdge 3 sensor in images per second.
Using the front and side overlap, we can estimate the approximate number of images captured of each point within the survey area for the Zenmuse X3 sensor, $p_{x3}$, and the MicaSense RedEdge 3 sensor, $p_{\mathrm{RE}3}$, as:
$$
p_{x3} = \frac{1}{\left(1-o_{\mathrm{side},x3}\right)\left(1-o_{\mathrm{front},x3}\right)}, \qquad
p_{\mathrm{RE}3} = \frac{1}{\left(1-o_{\mathrm{side},\mathrm{RE}3}\right)\left(1-o_{\mathrm{front},\mathrm{RE}3}\right)}.
$$
The code for these calculations can be found at https://github.com/mikoontz/neon-drone-workflow/blob/master/workflow/02_preprocess-drone-data/03_drone_L0_image-overlap-calculator.R.
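The linked script performs these calculations for our specific configuration; the base-R sketch below implements the same geometry in a general form for readers who want to explore their own sensor and overlap settings (the example call uses the Zenmuse X3 geometry with illustrative overlap values):

    # Ground footprint, transect spacing, flight speed, and photos per point for a
    # nadir-pointing camera; altitude in m, sensor dimensions and focal length in mm
    flight_params <- function(altitude_m, sensor_w_mm, sensor_h_mm, focal_mm,
                              o_side, o_front, imaging_rate_img_s) {
      x_ground  <- altitude_m * sensor_w_mm / focal_mm
      y_ground  <- altitude_m * sensor_h_mm / focal_mm
      t_vehicle <- (1 - o_side) * x_ground
      s_vehicle <- (1 - o_front) * y_ground * imaging_rate_img_s
      p         <- 1 / ((1 - o_side) * (1 - o_front))
      c(footprint_x_m = x_ground, footprint_y_m = y_ground,
        transect_spacing_m = t_vehicle, speed_m_s = s_vehicle,
        photos_per_point = p)
    }

    # Example: DJI Zenmuse X3 geometry at 100 m altitude (overlap values are illustrative)
    flight_params(altitude_m = 100, sensor_w_mm = 6.17, sensor_h_mm = 4.55,
                  focal_mm = 3.6, o_side = 0.9, o_front = 0.95,
                  imaging_rate_img_s = 0.5)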

Our aerial transects were 17.14 m apart, our vehicle flew at 3.16 m/s, the side overlap of the RedEdge 3 imagery was 80.4%, and the front overlap of the RedEdge 3 imagery was 95.2%. The estimated number of photographs per point in the survey area was 200 for the Zenmuse X3 camera and 105.5 for the MicaSense RedEdge 3 sensor. The crosshatch flight plan effectively doubles the expected number of photographs per point to 400 for the X3 camera and 211.0 for the RedEdge 3 (Figure 1).

Regulations

We obtained permission to access the NIWO NEON site from the site host, the University of Colorado Boulder Mountain Research Station, and NEON itself. We flew under the FAA Part 107 rules for commercial drone operations with a current remote pilot certificate, and ensured that the airspace was free for operating the UAS.

Radiometric calibration

The MicaSense RedEdge 3 multispectral camera comes with a small calibrated gray reflectance panel that reflects approximately 60% of light across the entire spectral extent captured by the sensor. We held the UAS over the panel and captured an image of it prior to flight, ensuring that our shadow did not cover the panel. The RedEdge 3 also integrates a DLS, which faces upward and measures illumination at the same time as the downward-facing image capture. We included the calibration panel photographs in the SfM processing workflow and also enabled the image-to-image corrections from the DLS. When loading the calibration panel photographs into the SfM software, we set the "known reflectance" of the panel in each of the five spectral channels to the values that we measured for this particular panel (Figure 2), rather than those provided by the manufacturer.

Georeferencing

We laid out orange cloth Xs over the nine permanent markers within the NIWO_017 field site (red points in Figure 1). Five of these points were visible from the air. These GCPs were located within the center of the flight area, without any geolocation representation at the edges, which was not ideal (Santana et al., 2021; Zimmerman et al., 2020).

Data management

For data collection, we recorded each flight's imagery on a separate 32-gigabyte SD card, rated at >90 MB/s write speed, that we formatted prior to the flight. For multiday trips, or if SD cards needed to be reused, we transferred imagery from the SD cards to at least one portable solid-state hard drive (Samsung T series) while still in the field. Upon returning from the field, we transferred images from the SD cards (or the portable solid-state hard drive, as the case may be) to two locations: (1) the solid-state hard drive on a local desktop gaming computer for short-term storage and processing, and (2) a NAS device with six spinning disk hard drives in a RAID array for long-term storage. Both the short-term storage (local desktop) and long-term storage (NAS) solutions are backed up to the cloud using a third-party backup client (Backblaze) at a cost of ~5.00 USD per terabyte per month. We used the same data levels as Koontz et al. (2021), except that we did not process our data to Level 4. To allow future data collection to integrate easily into this project, we compartmentalized each data product into a folder for the specific flight date (9 October 2019), which was housed in a folder for the specific flight location (NIWO_017). We used the Open Science Framework for public-facing storage (https://doi.org/10.17605/OSF.IO/ENBWU).

Data processing

We used a local desktop computer (Alienware Aurora R7 with an Intel Core i7-8700K 3.70-GHz hexacore processor and 64 gigabytes of RAM) for data processing. We followed the USGS workflow to process our raw MicaSense RedEdge 3 imagery into a digital surface model, an orthomosaic, and a dense point cloud using Agisoft Metashape version 1.6.1 (Over et al., 2021). We noted each step in the SfM process, as well as the parameter choices we made, in a .txt file (https://github.com/mikoontz/neon-drone-workflow/blob/master/workflow/03_structure-from-motion-of-drone-data/01_drone_agisoft-metashape-processing-steps.txt). We created a script to allow readers to download cropped versions of these SfM products that are relatively small in size in order to follow along with our post-SfM processing steps (https://github.com/mikoontz/neon-drone-workflow/blob/master/workflow/04_get-processed-example-drone-data/01_get-example-cropped-L1-and-L2-data.R). We used R for all post-SfM steps, particularly the sf package for working with vector data (Pebesma, 2018) and the terra package (Hijmans, 2021a) for working with raster data. The terra package is intended as a replacement for the raster package (Hijmans, 2021b), but some other R packages have not yet migrated their codebases to use terra. In those cases, we coerced terra objects to raster objects in order to preserve the interoperability of the various packages.

We classified the dense point cloud into “ground” and “nonground” points using a cloth simulation filter algorithm (Zhang et al., 2016) implemented in the lidR (Roussel et al., 2020; Roussel & Auty, 2021) package. Using the ground points, we interpolated a digital terrain model (DTM) representing the height of the ground (without the vegetation). We subtracted this DTM from the SfM-derived digital surface model (DSM) to create a canopy height model (CHM) representing the height of the vegetation in the survey area.
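A condensed sketch of this step using the lidR and terra packages is shown below; the file names are placeholders, and the function names reflect recent package versions (the cloth simulation filter is exposed as csf(), and terrain interpolation as rasterize_terrain()), so they may differ for older releases:

    library(lidR)
    library(terra)

    # Placeholder file names for the SfM-derived dense point cloud and digital surface model
    las <- readLAS("niwo_017_dense-point-cloud.laz")
    dsm <- rast("niwo_017_dsm.tif")

    # Classify ground points with the cloth simulation filter (Zhang et al., 2016)
    las <- classify_ground(las, algorithm = csf())

    # Interpolate a digital terrain model (DTM) from the classified ground points
    dtm <- rasterize_terrain(las, res = 1, algorithm = tin())

    # Canopy height model (CHM): vegetation height above the interpolated ground surface
    chm <- dsm - resample(dtm, dsm)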

To integrate our UAS data with NEON TOS field data, we used the CHM to detect and segment individual tree crowns. We used a variable window filter to detect individual trees; this filter scans the CHM and, for each focal pixel, searches for a local maximum height within a circular window whose radius is determined by the height of that pixel (Popescu & Wynne, 2004). That is, pixels in the CHM representing taller vegetation have a broader search radius within which the location of the maximum height is determined. We used the ForestTools package to implement this algorithm and adopted the optimal tree detection parameter set determined by Young et al. (2022) for a structurally complex mixed-conifer forest, with a variable search window defined by the following function:
$$
r = 0.04x,
$$
where $r$ is the radius of the variable search window and $x$ is the canopy height of the focal pixel.
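Continuing from the sketch above, this detection step can be written with the ForestTools package roughly as follows; the minimum height threshold is an assumption for illustration, and the argument names reflect recent ForestTools releases:

    library(ForestTools)
    library(raster)

    # ForestTools has historically expected raster-package objects, so coerce the terra CHM
    # (recent versions of the raster package can convert a SpatRaster directly)
    chm_r <- raster(chm)

    # Variable window filter: the search radius grows with canopy height (r = 0.04 * height)
    win_fun  <- function(x) { 0.04 * x }
    treetops <- vwf(CHM = chm_r, winFun = win_fun, minHeight = 2)   # minHeight is illustrative

The resulting treetop locations feed directly into the marker-controlled watershed segmentation (the ForestTools mcws() function) described next.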
Using the detected trees, we implemented a marker-controlled watershed segmentation algorithm in the ForestTools package to segment individual tree crowns (Plowright & Roussel, 2021). For each tree crown, we created a new geometry representing its bounding box (i.e., the smallest rectangle that fully contains the irregularly shaped crown polygon) in order to compare UAS-derived crown segments with those derived using deep learning approaches (Weinstein et al., 2019). We benchmarked our tree detection using the NeonTreeEvaluation package (Weinstein et al., 2021) to compare our detections with the NEON TOS field-collected tree locations and with previously annotated crown bounding boxes derived from NEON AOP imagery (Weinstein et al., 2021). The stem locations of trees in NEON TOS plots can be determined by downloading the woody plant vegetation structure data with the neonUtilities package (Lunch et al., 2021) and deriving precise stem geolocations with the geoNEON package (National Ecological Observatory Network, 2020), but they are also directly available in the NeonTreeEvaluation package. Within the NIWO_017 plot, we detected 60% of the field-measured stems (i.e., a recall score of 0.6). Comparison with the annotated crowns using the compute_precision_recall() function in the NeonTreeEvaluation package also provides precision scores (the fraction of detected trees that correspond to a reference tree, which penalizes false-positive detections), which can be combined with recall scores (the fraction of reference trees that are successfully detected) in an integrated measure of predictive ability called the F-score:
$$
p = \frac{T_{\mathrm{reference}} \cap T_{\mathrm{UAS}}}{T_{\mathrm{UAS}}}, \qquad
r = \frac{T_{\mathrm{reference}} \cap T_{\mathrm{UAS}}}{T_{\mathrm{reference}}}, \qquad
F = \frac{2 \times p \times r}{p + r},
$$
where $p$ is the precision, $r$ is the recall, $F$ is the F-score, $T_{\mathrm{reference}}$ is the number of reference trees (e.g., those identified in a field survey), $T_{\mathrm{UAS}}$ is the number of trees detected by the UAS, and $T_{\mathrm{reference}} \cap T_{\mathrm{UAS}}$ is the number of reference trees that were correctly detected by the drone (i.e., the true-positive detections).
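These quantities can also be computed directly from detection counts, as in the brief base-R illustration below (all counts are hypothetical):

    n_reference <- 117   # hypothetical number of field-surveyed reference trees
    n_uas       <- 160   # hypothetical number of UAS-detected trees
    n_matched   <- 92    # hypothetical true-positive detections

    precision <- n_matched / n_uas
    recall    <- n_matched / n_reference
    f_score   <- 2 * precision * recall / (precision + recall)
    round(c(precision = precision, recall = recall, F = f_score), 3)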

For our comparison, we set the threshold argument of the compute_precision_recall() function to 0.1, such that a predicted tree was considered correct if the area of the intersection of its bounding box with an annotated crown's bounding box, divided by the area of the union of those bounding boxes, was greater than 0.1. Our UAS-derived map of detected trees had a recall rate of 0.788 and a precision rate of 0.276, resulting in an F-score of 0.409. For comparison, the DeepForest algorithm's predictions for the locations of trees at NIWO_017 (Weinstein et al., 2021) had a recall rate of 0.861, a precision rate of 0.798, and an F-score of 0.828. The poorer performance of the UAS-derived tree detection approach suggests that a different combination of flight parameters, SfM photogrammetry parameters, or tree detection algorithm/parameters might be better suited to the subalpine forest at NIWO (Young et al., 2022).

To integrate our UAS data with NEON AOP reflectance data, we calculated NDVI from each sensor. We used the neonUtilities package to download the NEON AOP imaging spectrometer data (data product DP3.30006.001) covering the NIWO_017 site in 2019, using the easting and northing of the centroid of the NIWO_017 plot and a 20-m buffer as arguments to the byTileAOP() function (Lunch et al., 2021). We used the neonhs package (Joseph & Wasser, 2021) to convert the raw NEON AOP data product into a raster object that is more readily manipulated in R. Because the imaging spectrometer's spectral response overlaps with, but does not perfectly align with, the spectral response of the MicaSense RedEdge 3 sensor, we spectrally resampled the NEON AOP data to match the spectral resolution of the MicaSense RedEdge 3 sensor using the hsdar package (Lehnert et al., 2019) and the relative spectral response that we derived (Figure 5). We used the UAS-derived orthomosaic and the spectrally resampled NEON AOP orthomosaic to calculate NDVI. Figure 6 shows the comparison between NDVI as captured by the NEON AOP flight in August and our UAS-derived NDVI from our flight in October. We used the exactextractr package to extract the mean and standard deviation of NDVI derived from the UAS, as well as from the spectrally resampled NEON AOP, for each segmented tree crown (Baston, 2021). Figure 7 shows the comparison between NDVI derived from the NEON AOP and the UAS at an individual tree scale.
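A condensed sketch of the crown-level extraction with exactextractr follows; here ndvi_uas and ndvi_aop are assumed to be NDVI rasters from the UAS and the spectrally resampled NEON AOP, and crowns is assumed to be an sf polygon layer of the segmented tree crowns, all in the same coordinate reference system:

    library(exactextractr)
    library(sf)

    # Per-crown mean and standard deviation of NDVI from each platform
    crowns$ndvi_uas_mean <- exact_extract(ndvi_uas, crowns, "mean")
    crowns$ndvi_uas_sd   <- exact_extract(ndvi_uas, crowns, "stdev")
    crowns$ndvi_aop_mean <- exact_extract(ndvi_aop, crowns, "mean")
    crowns$ndvi_aop_sd   <- exact_extract(ndvi_aop, crowns, "stdev")

    # Per-tree comparison of NDVI between the two platforms (cf. Figure 7)
    plot(crowns$ndvi_aop_mean, crowns$ndvi_uas_mean,
         xlab = "NDVI (NEON AOP, resampled)", ylab = "NDVI (UAS)")
    abline(0, 1, col = "red")   # one-to-one line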

Figure 6. Normalized difference vegetation index (NDVI) image over the NIWO_017 plot (a) derived from the National Ecological Observatory Network Airborne Observation Platform (NEON AOP) spectral imager using data collected in August 2019 (data collection flights over the Niwot Ridge NEON site on 14, 15, 19, and 26 August 2019) and (b) derived from the MicaSense RedEdge 3 camera using data collected on 9 October 2019. The NEON AOP data were first spectrally resampled into the equivalent red and near-infrared bands of the MicaSense RedEdge 3 camera based on the relative spectral response of the RedEdge 3 instrument. The higher spatial resolution of the drone-derived data in (b) is apparent when compared to the NEON AOP-derived data in (a). Note that the difference in NDVI between the images may derive from three main sources: phenological differences in the vegetation, differences in the flight conditions such as time of day and cloud cover, or differences in instrumentation.
Figure 7. Normalized difference vegetation index (NDVI) for each detected tree crown in the NIWO_017 plot as derived from the National Ecological Observatory Network Airborne Observation Platform (NEON AOP) imaging spectrometer versus NDVI derived from the MicaSense RedEdge 3 sensor. The one-to-one line is shown in red, and the best-fit curve from a generalized additive model is shown in blue with a 95% CI shown in gray. The NEON AOP data, collected closer to peak growing season, mostly have higher per-tree values of NDVI. The nonlinear model fit highlights how, at the individual tree scale, spectral responses vary differently throughout the year, which might prove useful for future work (e.g., classifying trees to species based on how variable their spectral response is throughout the year).

DISCUSSION

Macroecology will benefit from a “macroscope” to enable the study of broad-extent phenomena across multiple scales of biological, geophysical, and social processes (Beck et al., 2012; Dornelas et al., 2019; Lawton, 1996). The ideal macroscope comprises a nested array of tools that provide full coverage of spatial and temporal observational domains. In their complementarity, the value of multiple observational tools in concert is more than the sum of the parts (Dornelas et al., 2019). Pairing UAS with NEON partially completes the constellation of Earth-observing tools that contribute to the macroscope, and combines the flexibility of UAS with the high quality and consistency of NEON. In this work, we aid the adoption of these tools among macroecologists by providing a mental model—a contextual framework—and some practical considerations for their integration.

Challenges

Challenges remain for integrating UAS with NEON, but they are surmountable. Some of these challenges are fundamentally associated with “big,” cross-scale data. Integrating data across scales brings a host of potential pitfalls that could pollute inference if care is not taken to avoid them (Zipkin et al., 2021). Big data in ecology are relatively new (Farley et al., 2018), and approaches to UAS-derived big data are fairly ad hoc across researchers (Wyngaard et al., 2019). Maintaining supportive communities of practice, such as the High Latitude Drone Ecology Network (https://arcticdrones.org/), can help overcome some of these idiosyncratic approaches. In the same vein, NEON provides an aspirational target for UAS educational resources, which are critical to ensuring that would-be NEON/UAS users have the environmental data science skills necessary to turn their data into inference (Hampton et al., 2017).

The proliferation of reasonably low-cost, off-the-shelf, drone-ready sensors (many designed for precision agriculture use) creates a need for validation of whether those instruments produce “science-grade” data (which itself is a relative term, depending on what the specific science requirements are for a given project). This validation may be achieved via direct comparison of the low-cost sensors with “state-of-the-science” instruments using coincident flights (e.g., Fawcett et al., 2020). Clear documentation of data provenance including sensor characteristics, data acquisition methods (e.g., flight pattern), and data acquisition conditions (e.g., time of day, cloud cover) will enable more rigorous data integration across instruments. Thus, integrating UAS operations with NEON can help anchor the community to the common currency of NEON data types, organization, and collection protocols, which will enhance the interoperability of UAS data.

Cyberinfrastructure for managing and processing UAS data is not yet built in a way that encourages consistency between projects or researchers. In this way, NEON again provides an aspirational example for how purpose-built cyberinfrastructure can facilitate macroecology. In fact, the foundational resources for building a valuable architecture for UAS data may already be represented in other NSF-sponsored projects (e.g., CyVerse, OpenTopography, and Open Science Framework). UAS-enabled research would benefit greatly from data storage solutions and streamlined analysis pipelines that are intentionally built to support a wide variety of users and use cases.

Critically, “accessibility” and “democratization” of macroecology encompass a broad, multifaceted notion of availability for and usability by anyone, and obstacles to accessibility extend beyond those we sought to remedy here. That is, our work to increase access to the elusive broad-extent/fine-grain observational domain with a mental model and an open workflow is an important but incomplete effort toward accessible macroecology. Illustrating this point, the reduced cost of Landsat images brought more researchers into the user base from lower resourced institutions and underrepresented parts of the world to do more topically diverse science (Nagaraj et al., 2020), but some barriers to access still exist (Miller et al., 2016). For instance, three quarters of users are men, and 65% of users are academic researchers (Miller et al., 2016). The Landsat archive was undeniably made more accessible to the collective benefit of science and society (Miller, 2016; Nagaraj et al., 2020), but even broader access (and therefore greater value; Miller, 2016) is possible. Greater accessibility of UAS and NEON as tools for macroecology will similarly require their user communities to be self-reflective and proactive about identifying and eliminating barriers to entry (Nagy et al., 2021).

Future directions

We conclude with a set of research themes that are well suited for UAS/NEON integration with example ecology applications, which we hope provides a vision to be built upon:
  1. Filling in spatial scales missed by NEON data collection (e.g., collecting data on a similar vegetation type of a NEON site but outside of NEON's direct footprint, capturing data at spatial resolutions finer than 10 cm in order to measure post-disturbance vegetation recovery);
  2. Filling in temporal scales missed by NEON data collection (e.g., capturing data in a year when a NEON site is skipped by the AOP, capturing data at a site multiple times per year to understand how snowpack changes throughout the year, tracking individual plant phenology through time and linking to PhenoCam data, and understanding temporal trends in biodiversity);
  3. Opportunistic data collection (e.g., capturing data immediately after a disturbance event to measure its severity);
  4. Connecting NEON data to other Earth-observing systems using UAS data as a bridge (e.g., spectrally unmixing Landsat pixels to determine relative species compositions by matching UAS spectral measurements to NEON TOS field measurements; and coordinating NEON data collection with UAS and other data collection to expand the utility of NEON products (e.g., Chadwick et al., 2020; Wang et al., 2020));
  5. Supplementing NEON data using sensors that are not part of the NEON suite of sensors (e.g., thermal data to compare thermal regulation of different plant species, and measuring water stress in different trees across gradients of topoclimate);
  6. Validating lower cost, off-the-shelf payloads against the state-of-the-science NEON data collection (e.g., determining how well a multispectral imager designed for agriculture captures surface reflectance, and determining how well an algorithm detects the trees in a NEON vegetation structure plot);
  7. Replacing high-cost NEON AOP flights with lower cost alternatives (e.g., if the drone-derived data are “good enough” compared with the AOP, can we reduce the operational costs of the AOP?); and
  8. Using NEON data as a common currency for validating new methods (e.g., the case study we showed here, comparing a deep learning/orthomosaic-based approach and a variable window filter/CHM approach to detecting individual trees measured by the NEON TOS).

UAS can help ecologists harness the NEON data revolution with their complementary approach to measuring the understudied broad-extent/fine-grain observational domain. NEON's long-term, consistent, high-quality, continental-extent measurements enable data-driven discovery that is enhanced with new opportunities to explore cross-scale questions when paired with the relatively affordable, flexible measurements of UAS. We hope that by providing a mental model for data collection and integration, we remove some of the friction points associated with these tools and make them more accessible. Further democratizing macroecology will require community support for an open science ethos, which might include: low-cost cyberinfrastructure, open observatories, data networks, well-documented workflows, open education resources that increase data skills, and more inclusive practices that create opportunities for researchers across a diversity of career stages and institutions to participate in and contribute to “big data” macroecology. We envision NEON as an anchor for UAS-enabled ecology, with future research efforts that embrace the spirit of democratization and strive to broaden participation in this emerging discipline.

ACKNOWLEDGMENTS

Funding for the 2019 NEON Science Summit was provided by NSF Award number 1906144. Additional funding was provided by Earth Lab through the University of Colorado, Boulder's Grand Challenge Initiative, the Cooperative Institute for Research in Environmental Sciences, and the US Geological Survey North Central Climate Adaptation Science Center. Data collection was supported by the USDA Forest Service Western Wildland Environmental Threat Assessment Center. Megan E. Cattau was supported in part by the National Aeronautics and Space Administration New Investigator Program (grant number 80NSSC18K0750). Jennifer K. Balch was supported in part by the NSF's CAREER (grant number 1846384) and Macrosystems (grant number 2017889) programs. The NEON is a program sponsored by the NSF and operated under cooperative agreement by Battelle. This material is based in part upon work supported by the NSF through the NEON Program. We thank Chelsea Nagy, Jeff Sloan, Natalie Latysh, Harland Goldstein, and two anonymous reviewers for their feedback, which greatly improved the manuscript. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the US Government.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

DATA AVAILABILITY STATEMENT

Data and code (Koontz et al., 2022) are available from the Open Science Framework: https://doi.org/10.17605/OSF.IO/ENBWU.