Democratizing macroecology: Integrating unoccupied aerial systems with the National Ecological Observatory Network
Macroecology research seeks to understand ecological phenomena with causes and consequences that accumulate, interact, and emerge across scales spanning several orders of magnitude. Broad-extent, fine-grain information (i.e., high spatial resolution data over large areas) is needed to adequately capture these cross-scale phenomena, but these data have historically been costly to acquire and process. Unoccupied aerial systems (UAS or drones carrying a sensor payload) and the National Ecological Observatory Network (NEON) make the broad-extent, fine-grain observational domain more accessible to researchers by lowering costs and reducing the need for highly specialized equipment. Integration of these tools can further democratize macroecological research, as their strengths and weaknesses are complementary. However, using these tools for macroecology can be challenging because mental models are lacking, thus requiring large up-front investments in time, energy, and creativity to become proficient. This challenge inspired a working group of UAS-using academic ecologists, NEON professionals, imaging scientists, remote sensing specialists, and aeronautical engineers at the 2019 NEON Science Summit in Boulder, Colorado, to synthesize current knowledge on how to use UAS with NEON in a mental model for an intended audience of ecologists new to these tools. Specifically, we provide (1) a collection of core principles for collecting high-quality UAS data for NEON integration and (2) a case study illustrating a sample workflow for processing UAS data into meaningful ecological information and integrating it with NEON data collected on the ground—with the Terrestrial Observation System—and remotely—from the Airborne Observation Platform. With this mental model, we advance the democratization of macroecology by making a key observational domain—the broad-extent, fine-grain domain—more accessible via NEON/UAS integration.
Macroecology is the study of spatially extensive systems whose biological, geophysical, and social components interact dynamically both within and across spatiotemporal scales (Heffernan et al., 2014). Macroecology, in its explicit consideration of scale, extends from a rich history of basic ecological research seeking to explain patterns in nature (Levin, 1992; Turner, 1989). At the same time, macroecology is highly relevant to applied ecology, as the broader spatial extents studied reflect the scale at which many societally relevant challenges, and perhaps their solutions, arise (Heffernan et al., 2014; LaRue et al., 2021). The causes and consequences of phenomena under investigation in macroecology can span many spatial scales, which motivates a characteristic feature of the data to be brought to bear: They must often be simultaneously fine in grain (i.e., spatial resolution) and broad in extent (i.e., area covered) (Beck et al., 2012).
Ecologists typically face a data collection trade-off between grain and extent that constrains the observational domain of their research (Ernest, 2018; Estes et al., 2018). Indeed, the spatial and temporal observational domains of most ecology research are narrow (Estes et al., 2018). The grain/extent trade-off can sometimes be overcome, but at a high cost. For example, the Global Airborne Observatory collects high spatial and spectral resolution data at broad extents (Asner et al., 2007, 2012), but the price of data acquisition and processing tallies in the millions of US dollars (USD), even though the per-area cost is low (Asner et al., 2013). As another example, the US Forest Service Forest Inventory and Analysis program maintains a network of over 350,000 fine-grain field plots regularly spaced over the entire forested area of the United States (over 9.1 million km2; approximately 1 plot every 2400 ha) at an annual cost of tens of millions of dollars (Alvarez, 2020; Gillespie, 1999). Most science studies have relatively modest budgets and are conducted by just a few individuals (Heidorn, 2008). The modal award size from the National Science Foundation's (NSF) Division of Environmental Biology was about 200,000 USD between 2005 and 2010 (Hampton et al., 2013). While the fine-grain, broad-extent observational domain is invaluable for macroecology, it can be inaccessible to ecologists with resource or funding limitations.
Macroecology can be democratized when barriers to research participation are reduced (Guston, 2004), such as by lowering the cost of, or innovating past limitations to, access to relevant scales of observation. Removing these barriers improves science because the rate, direction, and quality of science are, in part, shaped by the available research inputs (Nagaraj et al., 2020). For instance, the cost of imagery from the archive of Landsat Earth observation imagery was reduced in 1995 and restrictions on sharing were relaxed, which dramatically increased the quantity, quality, and diversity of Landsat-enabled science (Nagaraj et al., 2020). The same archive became freely available in 2008 with concomitant benefits to projects that rely on Landsat observations (e.g., Picotte et al., 2020). The changing accessibility of Landsat data is noteworthy for macroecology, as the archive provides consistent global-extent, relatively fine-grain (30 m) imagery since 1984. Particularly when integrated with other tools whose purpose is to broaden research participation (such as the free, planetary-scale geographic information system “for everyone,” Google Earth Engine; Gorelick et al., 2017), Landsat imagery has led to breakthrough science that is “globally consistent and locally relevant” such as the first global map of forest cover changes over a decade-long period at a relatively fine scale (Hansen et al., 2013). Thus, democratized research stimulates revolutionary science.
In its democratic aim, the National Ecological Observatory Network (NEON) is revolutionary (Balch et al., 2020; NSF, 2013). NEON is a continental-scale observation facility in the United States comprising 81 sites within 20 ecoclimatically distinct domains and an operational lifespan on the order of decades (Keller et al., 2008; Schimel, 2013). NEON is designed to collect rigorous, consistent, long-term, and open access data to better understand how US ecosystems are changing, using a combination of field measurements obtained by trained personnel, ground- and aquatic-based automated sensors, and plane-based instruments that collect both active and passive remotely sensed data (Kampe et al., 2010; NSF, 2013). NEON observations span spatial scales, from measurements of individual organisms within small field plots to 10-cm resolution red–green–blue (RGB) imagery, 1-m imaging spectroscopy, and lidar (light detection and ranging) point clouds across hundreds of square kilometers, with measurements replicated across sites that span the continental extent of NEON (Keller et al., 2008; Musinsky et al., 2022). A stated goal of NEON is to democratize access to ecological research, particularly at broad extents (NSF, 2013)—its promise is continental-scale ecology for everyone. NEON pairs publicly available data with a strong outreach and education effort to help realize this promise. In this way, NEON broadens access to macroecology by reducing barriers to entry, particularly cost, fieldwork requirements, and technical expertise (Nagy et al., 2021). An “instrument” such as NEON collecting standardized data at such scales leads to inevitable trade-offs—in the specific times, locations, and type of data that are sampled. While the NEON data are on their own sufficient for advancing ecology, part of what makes NEON revolutionary is its foresight in facilitating connections to other ecological data. 
In this way, the fundamental limitations of NEON can be overcome with bridges to more targeted ecological studies.
UAS can also revolutionize ecology (Anderson & Gaston, 2013). UAS, comprising a vehicle and a payload, are increasingly being used to collect high spatial resolution information over relatively large spatial extents for ecological science applications (Wyngaard et al., 2019). The vehicle is also known as a "drone" or a "UAV" standing for "unoccupied aerial vehicle," "uncrewed aerial vehicle," or "unmanned aerial vehicle," though we support phasing out the gendered language of this last expansion (Joyce et al., 2021). The payload is the instrumentation carried by the vehicle beyond what is critical for flight operations, and gives the UAS its scientific value. Importantly, it is not the vehicle itself that enables ecological studies at heretofore inaccessible scales, but rather the vehicle's ability to position a data collecting payload (i.e., a sensor) in a repeatable, efficient, hard-to-reach manner. For example, one use case for UAS is structure-from-motion (SfM) photogrammetry, which generates a three-dimensional model of an area of interest using two-dimensional images from multiple overlapping viewing angles (Westoby et al., 2012). The minimum requirement for SfM photogrammetry is two-dimensional imagery, which can be captured from the ground using a handheld sensor (e.g., a digital camera) to great effect for some applications (Piermattei et al., 2019). A UAS-based camera can capture imagery from higher up in, or above, the canopy, which allows for measurement of higher vegetation strata (Kuželka & Surový, 2018), including total height for above-canopy applications. UAS-based SfM photogrammetry also increases the extent that can be covered with surveys (Jackson et al., 2020) because aerial transects are unimpeded by varied terrain and vegetation encountered on ground transects.
Unimpeded aerial transects are also more reliably repeated than ground surveys that require navigating through vegetation and are likely to be less impactful to that vegetation. UAS provide an avenue to flexibly and affordably fill spatiotemporal gaps in data collected by traditional means—they can be deployed more frequently and capture finer grain data than airplane- and satellite-based platforms, and can cover greater extents than ground surveys.
UAS and NEON complement each other. Each can be a key tool for macroecology research, but their integration offers an opportunity to alleviate some of their fundamental constraints in a similar way as an integration of NEON with other Earth-observing networks (Balch et al., 2020; Nagy et al., 2021). NEON data derive from "state-of-the-science" instrumentation with thorough documentation and are standardized at a continental scale. NEON data collection is not only preplanned, which makes the resulting data somewhat predictable, but also rigid in space, time, and type. In contrast, UAS operations are nimble and customizable, but the resulting data are relatively under-validated with data standards that are ad hoc, idiosyncratic, and lacking in consistency, which makes interoperability of those data across projects a challenge (Wyngaard et al., 2019). Realization of the benefits of UAS–NEON integration by ecologists is dually challenged by the relative novelty of these tools (Nagy et al., 2021; Wyngaard et al., 2019), as well as by a community gap in the data science skills needed to navigate their associated workflows (Balch et al., 2020; Hampton et al., 2017; Nagy et al., 2021). Not knowing where to start with two new tools is a daunting proposition, and unstructured efforts to gain practical proficiency for research often come at the expense of doing research itself (Olah & Carter, 2017). Reducing these barriers to proficiency therefore has tremendous research value.
Mental models help novices become experienced practitioners by providing a contextual framework for new knowledge (Knapp & D'Avanzo, 2010). A lack of a synthesized contextual framework for the practical use of UAS for ecology research, particularly for NEON integration, challenges the adoption of these tools and hampers their ability to democratize macroecology (Assmann et al., 2019; Wyngaard et al., 2019). We assembled a working group of participants at the 2019 NEON Science Summit in Boulder, Colorado, with a goal to synthesize current practical knowledge and provide a sample workflow to guide ecologists with a mental model for using UAS and integrating with NEON. In this work, we aim to lower the barrier to entry for using UAS and NEON to do ecology. Specifically, we focus on optical data collected by each tool over terrestrial sites and provide (1) a collection of what we consider to be the 10 core principles for integrating UAS with NEON (science requirements, vehicle, payload, environment, flight planning, rules/regulations, radiometric calibration, georeferencing, data management, and data processing) and (2) an illustration of these principles with a real-world, well-documented workflow that processes UAS data into meaningful ecological information, then integrates it with NEON Airborne Observation Platform (AOP) and Terrestrial Observation System (TOS) data at the NEON Niwot Ridge (NIWO) site.
CORE PRINCIPLES FOR UAS/NEON INTEGRATION
We support and extend one of Assmann et al.'s (2019) themes regarding research use of UAS in order to highlight the first core principle for integrating UAS with NEON: knowing what the science requirements are for the data to be collected and what data collection efforts are “good enough” to meet those requirements. Using NEON to advance ecology is a type of data-driven discovery, in which the high-quality, but rote, data collection occurs before the science questions are generated (Lindenmayer & Likens, 2018). UAS data collection can be more flexible and responsive, which makes it more suitable for discovery driven by particular questions posed ahead of time. Integration of UAS and NEON could therefore be considered a hybrid between data- and question-driven discovery, where there is a dynamic between creative use of the existing NEON data, generation of new specific questions, and augmentation of the existing NEON data with UAS data collection to help answer those questions. During this process, a clear science question helps guide the data collection/collation needs, which can minimize the amount of researcher energy spent on developing tools and workflows that ultimately prove to be superfluous (Mahood et al., 2022).
The vehicle in a UAS is the flying machine that holds the payload. One key distinction between vehicle types is whether rotor systems or fixed wings are used for lift (the upward force that keeps the vehicle in the air). Rotocopter vehicles (also known as "multicopters," "multirotors," "quadcopters," "hexacopters," or "octocopters" depending on the number of rotor systems) consist of a body and (usually) four to eight rotor systems that provide both lift and thrust (horizontal motion). These types of vehicles are characterized as "vertical takeoff and landing" (VTOL). Fixed-wing aircraft use wings for lift and use rotor systems only for thrust. Hybrid vehicles use rotor systems for lift during ascent and descent but fixed wings for lift during the flight, and are sometimes referred to as VTOL fixed-wing systems to highlight this combination of features. The structure and size of the vehicle determine its functionality in the field, and thus, a project's objectives can often help constrain the choices available. Rotocopter platforms are more maneuverable, often less expensive, easier to fly, and more transportable, and have a higher payload capacity relative to fixed-wing aircraft. For these reasons, rotocopters are often preferred by ecologists. In contrast, fixed-wing aircraft have longer flight times with better battery usage and thus can cover larger areas more efficiently than rotocopters. For example, covering the full extent of a given AOP footprint (147.6 ± 107.2 km2 for core and relocatable sites) may be most efficiently conducted with a fixed-wing or hybrid vehicle. They are also more stable in adverse conditions (e.g., high winds) and have a safer recovery from motor power loss. VTOL fixed-wing systems can combine the efficiency of a fixed wing with the small takeoff/landing footprint of a rotocopter. A summary of the advantages and disadvantages of these vehicle types is found in Table 1.
Table 1. Advantages and disadvantages of UAS vehicle and payload options for optical data collection.

| Component | Type | Advantages | Disadvantages | References |
| --- | --- | --- | --- | --- |
| Vehicle | Rotocopter | Ease of takeoff and landing | Shorter flight time (~20 min) | Anderson and Gaston (2013), Goodbody et al. (2017), Pádua et al. (2017) |
| Vehicle | Fixed wing | Longer flight time (2+ h); covers large spatial extent; more stable in wind | Higher minimum flight speed to keep it aloft (affecting overlap, image quality); complex takeoff/landing | Anderson and Gaston (2013), Goodbody et al. (2017), Pádua et al. (2017) |
| Vehicle | Vertical takeoff and landing fixed-wing hybrid | Simpler takeoff/landing; longer flight time | Newer technology; expensive | Anderson and Gaston (2013), Goodbody et al. (2017), Pádua et al. (2017) |
| Payload | Red–green–blue camera | Small size; affordable; fine spatial resolution | Limited spectral extent (visible wavelengths only); spectrally overlapping, imprecise spectral information | Pádua et al. (2017), Adão et al. (2017) |
| Payload | Multispectral sensor | Small size; more precise spectral information | Limited spectral sampling, typically in visible and infrared wavelengths; more complex data acquisition and post-processing | Pádua et al. (2017), Adão et al. (2017) |
| Payload | Hyperspectral/imaging spectrometer | High spectral resolution; high spectral extent | Heavy; expensive; very complex data acquisition and post-processing | Pádua et al. (2017), Adão et al. (2017) |
A flat surface clear of obstructions (e.g., on dirt rather than grass, away from forest canopy) is ideal for UAS takeoffs and landings. VTOL systems require a smaller takeoff and landing footprint, which may be satisfied with only a small canopy gap, compared with vehicles that use fixed wings for lift, which require a "runway" for takeoff. Locating a suitable takeoff area may be challenging at some NEON field sites (e.g., NIWO, with dense canopy cover) and easy at others (e.g., San Joaquin Experimental Range, with an open woodland ecotype). Takeoffs and landings from a clean, stable, flat surface (e.g., plywood or a car floor mat) will prevent dirt from obstructing or scratching the sensor lens and will make for a more controlled ascent/descent.
With any platform, vehicle endurance limitations and the mission goals will determine how many flights are required to complete data collection. In many cases, it will be necessary to use several batteries to keep the vehicle flying for the duration of a field day. Even if only one flight is needed to collect data, extra batteries are still valuable to have on hand in case the first flight does not go as planned and follow-up flights are required. Batteries from many vehicle manufacturers (e.g., Da-Jiang Innovations [DJI]) will automatically discharge after a period of nonuse as a safety feature, so it is good practice to wait to charge all batteries until near the time they will be used in order to ensure that they will be at their peak capacity when they are needed (e.g., do not charge them until the night before you need them). An energy source to charge batteries in the field, like a solar charger or gasoline-powered generator, may also be necessary for very long missions or multiple days of data collection. As a guideline, you can determine how many batteries your portable energy source can charge by determining its energy capacity in watt-hours (Wh), multiplying by 90% (making the calculation such that you leave 10% of the energy source's capacity rather than fully draining it), then dividing by the capacity of a single UAS battery in Wh and rounding down to the nearest whole number to account for any unpredictable inefficiencies. For instance, a Goal Zero Yeti 1400 battery (used successfully by some authors) can be charged with solar panels and stores 1400 Wh of energy, which results in 1260 Wh of usable energy if it were to be drained to 10% capacity. Each battery of the DJI Phantom 4 Pro aircraft (a common choice for mapping) stores 89.2 Wh of energy, so the Yeti 1400 should be able to charge about 14 batteries before it needs to be recharged itself (1260/89.2 = 14.13, which is 14 when rounded down). 
If an efficient 1-gallon (3.79-liter) gasoline-powered generator can produce 6000 Wh of energy, that results in 5400 Wh of usable energy, which is equivalent to charging about 60 batteries for the DJI Phantom 4 Pro.
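The battery-counting guideline above can be expressed as a short helper function. This is a minimal sketch of the arithmetic described in the text; the function name and the 10% reserve default are our own illustrative choices.

```python
def chargeable_batteries(source_wh, battery_wh, reserve_frac=0.10):
    """Estimate how many UAS batteries a portable energy source can charge.

    Leaves `reserve_frac` of the source's capacity unused and rounds down
    to the nearest whole battery to absorb unpredictable inefficiencies.
    """
    usable_wh = source_wh * (1 - reserve_frac)
    return int(usable_wh // battery_wh)

# Worked examples from the text (DJI Phantom 4 Pro battery: 89.2 Wh):
print(chargeable_batteries(1400, 89.2))  # Goal Zero Yeti 1400 -> 14
print(chargeable_batteries(6000, 89.2))  # ~1-gallon generator -> 60
```

Both results match the hand calculations in the text (1260/89.2 = 14.13 and 5400/89.2 = 60.5, each rounded down).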
The payload is the equipment carried by the UAS that collects data and combines with the vehicle to constitute the system (the "S" in "UAS"). In fact, despite the spotlight often being on the drone vehicle, the payload component is at least as important, since the main purpose of the vehicle is merely to position the payload where it needs to be in order to capture appropriate data. For ecologists interested in optical data, the payload may be a simple camera or a more specialized remote sensing instrument sensitive to particular wavelengths of electromagnetic radiation. The scientific questions will dictate the data requirements, which will in turn drive the payload decision. Typically, the selection of a sensor represents a trade-off between spatial resolution (the size of pixels in the imagery at a set altitude), spectral resolution (the number of distinct portions of the electromagnetic spectrum that the sensor can detect), spectral extent (how much of the electromagnetic spectrum the sensor can detect), and cost. For example, while imaging spectroscopy provides high spectral resolution and extent that may allow measurement of specific chemical compounds in vegetation (e.g., foliar nitrogen; Knyazikhin et al., 2013), a multispectral instrument with fewer spectral channels (Koontz et al., 2021) or even an RGB camera (Scholl et al., 2020) may be more than sufficient for classifying vegetation to species. Similarly, sensors with high spatial resolution capture fine detail in their imagery, but that detail can hinder measurement of a variable of interest, such as individual trees, because post-processing steps can be negatively affected when those fine details move in the wind (Young et al., 2022).
Hyperspectral instruments and high-resolution cameras are relatively expensive in terms of purchase cost, post-processing time, and data storage requirements, but simple RGB and multispectral cameras can be affordably bought off the shelf, so it is worth considering whether they would suffice for the scientific question of interest. A summary of the advantages and disadvantages of these different payload types for collecting optical data can be found in Table 1.
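The "spatial resolution at a set altitude" trade-off mentioned above can be made concrete with the standard ground sampling distance (GSD) calculation for a nadir-pointing frame camera. This is an illustrative sketch: the DJI Phantom 4 Pro camera values used in the example (13.2 mm sensor width, 8.8 mm focal length, 5472-pixel image width) are nominal manufacturer specifications, not figures from the text.

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             image_width_px, altitude_m):
    """Approximate ground sampling distance (cm/pixel) for a nadir image
    over flat terrain, using similar-triangles camera geometry."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# DJI Phantom 4 Pro camera at 100 m above ground level:
gsd = ground_sampling_distance(13.2, 8.8, 5472, 100)
print(round(gsd, 2))  # ~2.74 cm/pixel
```

Doubling the flight altitude doubles the GSD (coarser pixels), which is one reason flight altitude appears again in the flight-planning principles below.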
It is also important to consider how the payload will be integrated with the vehicle, which generally requires considering the combination of the vehicle and payload simultaneously. In some cases, the payload can operate entirely independently from the vehicle, and integration only requires a means of physically attaching the components together. In other cases, the payload relies both on power and on electronic signaling from the vehicle in order to capture data, and integration may require more specialized electrical and mechanical engineering expertise. It is generally advisable to use a prebuilt integration kit or an already-integrated sensor/vehicle system if the payload meets the science requirements (or nearly so).
The environment of the UAS mission can affect both the equipment performance and the data collection such that the intended operational conditions must be considered during vehicle/payload selection and flight planning. Foremost, the vehicle and the payload must be capable of functioning in the desired environment. UAS flights at high elevations or in cold weather will drain the battery faster than at sea level, and some popular vehicles will not allow takeoff if the temperature is too cold (or hot). While some vehicles are designed to withstand light precipitation and dust, many would be damaged under such flight conditions. Strong winds can push the UAS off course or require the UAS to work harder to maintain its course, which drains the battery faster and reduces endurance. Variable terrain within the survey area may also affect vehicle endurance, as more energy is required to ascend and descend while also traversing along flight transects in the horizontal plane. Managing the temperature of the mission-critical electronics is just as important as that of the vehicle's batteries during UAS operations. The vehicle remote controller and any other peripherals such as a tablet computer are susceptible to battery drain in extreme temperatures, and cold temperature can cause the vehicle and/or sensor to malfunction. The NEON field sites exhibit a wide range of conditions that can impact UAS operations. For instance, the mean annual temperature for NEON AOP sites ranges from −12°C at the Utqiaġvik site in Alaska to 25°C at Lajas Experimental Station in Puerto Rico (NEON Field Site Metadata; https://www.neonscience.org/sites/default/files/NEON_Field_Site_Metadata_20210226_0.csv; accessed 16 March 2021). Expectations of unfavorable environmental conditions may be enough to dictate what equipment should comprise the UAS.
For example, high-wind conditions at NIWO may warrant a fixed-wing platform; however, the dense forest would make takeoff and landing much easier with a rotocopter. In some cases, steps can be taken to mitigate the unfavorable environmental conditions, such as keeping equipment out of direct sunlight to prevent overheating (to the point of adding sun umbrellas or shade tarps to the required equipment list) and storing batteries in a dry cooler when not in use in order to insulate them against temperature extremes.
Environmental conditions may also impact data collection on automated flights, particularly for optical data. Ideal conditions for optical data collection are evenly lit with either complete cloud cover or clear skies. If flying takes place under clear sky conditions, then the sun should be high in the sky so it does not cast long shadows—ideally within a couple of hours of solar noon (i.e., 10:00 AM to 2:00 PM for standard time, and 11:00 AM to 3:00 PM for regions that observe daylight saving time) (Assmann et al., 2019). Note that some SfM software guidelines specifically suggest not flying near solar noon, as this can create particularly bright areas within each image that challenge the SfM algorithms (MapsMadeEasy; https://www.mapsmadeeasy.com/data_collection; accessed 19 November 2021).
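When planning the flight window, local clock time of solar noon can be approximated from longitude and the local UTC offset. This is a rough sketch that ignores the equation of time (which shifts true solar noon by up to roughly ±16 min over the year); the site coordinates are illustrative values for NEON Niwot Ridge.

```python
def approx_solar_noon(longitude_deg, utc_offset_hr):
    """Approximate local clock time of solar noon, in decimal hours.

    Uses only the offset between the site's longitude and the center of
    its clock time zone; ignores the equation of time.
    """
    return 12.0 - (longitude_deg / 15.0 - utc_offset_hr)

# NEON Niwot Ridge (NIWO), ~105.58° W, on Mountain Standard Time (UTC-7):
noon = approx_solar_noon(-105.58, -7)
print(round(noon, 2))  # ~12.04, i.e., about 12:02 PM MST
```

The even-lighting window suggested above then spans roughly two hours on either side of this value.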
Prior to flights, it is important to ensure that the weather will be favorable for data collection. A handheld instrument for measuring temperature, relative humidity, and wind speed may also aid in the reporting of flight conditions, though note that the wind speed at flight altitude may be different than what is measured on the ground. In many cases, taking a picture of the sky and a screenshot of the weather forecast from a reputable source (e.g., the National Oceanic and Atmospheric Administration) is a convenient and sufficient way to ensure later reporting on flight conditions. In fact, the NEON AOP does exactly this for their daily flight reports.
One of the key benefits of UAS operations is the ability to program missions to be automatically followed by the vehicle's onboard flight software. For optical data collection such as that required for SfM photogrammetry, the mission typically involves aerial transects with images captured at regular time or distance intervals so that objects in a scene are imaged from many viewing angles (often in excess of 100; Figure 1). Successful flight planning requires consideration of the flight parameters, flight planning software, and operational routine.
The flight parameters are crucial determinants of whether or not the SfM photogrammetry will successfully create a digital model of the survey area. Flight parameters are typically described in terms of the front and side overlap of the resulting imagery, as well as the sensor angle. The front overlap is a function of flight speed, flight altitude, frequency of image capture, and the vertical field of view of the sensor, while the side overlap is a function of flight altitude, horizontal field of view of the sensor, and distance between transects. Overlap in excess of 80% for both front and side overlap (Dandois et al., 2015) and even as high as 95% front overlap (Frey et al., 2018; Torres-Sánchez et al., 2018) is required for successful photogrammetric reconstructions of more complex vegetation (such as denser forests) using commonly available processing software. Lower overlap may be sufficient for two-dimensional mapping quality, though the processed product may not penetrate deeply into canopy gaps (Dandois et al., 2015), and image artifacts such as “leaning” objects, which were only imaged from an oblique angle, are more prevalent. Additional overlap can be achieved by augmenting parallel transects with a second set of parallel transects rotated 90° to the first (a crosshatch pattern; Figure 1). Additional viewing angles can be achieved by tilting the sensor off nadir in order to capture oblique imagery, which can aid in scene reconstruction (Cunliffe et al., 2016; James & Robson, 2014). Published work exists that determines optimal flight parameters for creating digital representations of specific survey areas (Dandois & Ellis, 2013; Díaz et al., 2020; Frey et al., 2018; Nesbit & Hugenholtz, 2019; Ni et al., 2018; Swayze et al., 2021; Torres-Sánchez et al., 2018; Young et al., 2022), but it still may require some trial and error to optimize parameters for a new study area or system.
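The overlap geometry described above can be sketched for a nadir-pointing sensor over flat terrain. This is a simplified model, and the field-of-view, speed, interval, and transect-spacing values in the example are hypothetical, not recommendations from the text.

```python
import math

def image_footprint_m(altitude_m, fov_deg):
    """Ground footprint (m) of one image dimension for a nadir sensor."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def front_overlap(altitude_m, vfov_deg, speed_m_s, capture_interval_s):
    """Fraction of along-track overlap between consecutive images."""
    footprint = image_footprint_m(altitude_m, vfov_deg)
    return 1 - (speed_m_s * capture_interval_s) / footprint

def side_overlap(altitude_m, hfov_deg, transect_spacing_m):
    """Fraction of across-track overlap between adjacent transects."""
    footprint = image_footprint_m(altitude_m, hfov_deg)
    return 1 - transect_spacing_m / footprint

# Hypothetical mission: 100 m AGL, 47° vertical and 60° horizontal FOV,
# 5 m/s flight speed, one image every 2 s, 20 m between transects.
print(round(front_overlap(100, 47, 5, 2), 2))  # ~0.89 front overlap
print(round(side_overlap(100, 60, 20), 2))     # ~0.83 side overlap
```

Inverting these relationships (solving for speed, interval, or spacing at a target overlap) is what most flight planning software does internally, which is why it only asks for altitude and desired overlaps.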
Flight planning is typically achieved using specialized software, sometimes run on a separate device such as a tablet computer. Most flight software allows for setting the altitude and the desired forward and side overlap for a given aircraft and sensor. Two other important software features that may routinely be relevant for ecology are terrain following and Internet-free operations. Terrain following enables the vehicle to ascend and descend to match topographic changes within the area of interest, such that approximately the same altitude above ground level (AGL) is maintained throughout all aerial transects. This serves two key functions: It ensures the safety of the vehicle, and it maintains approximately the same ground sampling distance for imagery, which aids in processing. Some missions are most easily created once in the field in order to incorporate better information on the area of interest, takeoff/landing locations, and visibility throughout the flight. An ability for the software to function offline and to cache background map imagery can be critical for real-world UAS use. The flight software is resource-intensive and generally requires a computer or tablet with relatively high computing power. We have experienced flight software freezing mid-flight due to computing resource overload when using tablets that were not up to the task, which can create a hazardous situation. It is likely worth investing in a device with faster processors and/or more random access memory (RAM). Finally, some flight software packages provide additional functionality if the tablet has geolocation services—an ability to determine its location on the Earth by connecting with satellite networks. For instance, the flight software may display the tablet's location on the background map during the flight or even update the "home point" location for the UAS during the mission as the pilot moves around.
The home point is the location to which the UAS returns and lands after a mission is completed, a battery is depleted, or the pilot triggers a manual “return to home” command. An updating home point might allow the pilot to traverse the landscape to stay closer to the UAS, thereby better maintaining a visual line of sight or allowing the UAS to collect more data per flight since the travel distance to the landing point is minimized (during which time data typically are not collected). Not all tablets have geolocation services; as of this writing, the Cellular+Wi-Fi version of the Apple iPad Pro has geolocation services, but the Wi-Fi-only version does not.
A final consideration for successful flight planning is to create a routine for consistently executing missions. Consistent repetition of routine steps prior to, during, and after a flight ensures that all components of the UAS work as intended in concert with each other, and checklists facilitate this consistency (Degani & Wiener, 1993). We highly recommend developing and using some kind of checklist for UAS operations (Appendix S1)—there is good reason checklists are part of standard operations for aviators ranging from pilots of small private aircraft to the NEON AOP flight crew to NASA astronauts! Some applications (such as Kittyhawk; https://kittyhawk.io/) allow for automatic logging of checklist run-throughs, which further reduces barriers to their use.
In the United States, research use of UAS must comply with legal regulations that govern flight operations. These restrictions have historically been cited as a hurdle to the adoption of UAS for research use (Vincent et al., 2015). There are currently three main legal frameworks governing UAS operations within the United States: permissions/regulations for a specific organization (e.g., a university) granted under a Certificate of Authorization (COA) from the Federal Aviation Administration (FAA), regulations for commercial operations (described in Title 14 of the Code of Federal Regulations Part 107 and colloquially referred to as “Part 107 rules”), and regulations for recreational operations (described in Chapter 448 of Title 49, US Code, Section 44809, and colloquially referred to as “Recreational Flyer rules”). COAs are generally labor-intensive to set up and maintain, as they require ongoing coordination with the FAA, but they can allow for operations not typically permitted under other regulatory frameworks. The commercial and recreational operational rules apply to individuals, rather than organizations, and have progressively become more clearly defined and permissive. For instance, a recent amendment to the Part 107 regulations clarified that the use of UAS by an institution of higher education for research or education purposes is considered “recreational use” and is subject to recreational operational rules rather than commercial operational rules. These rules applying to individuals allow for myriad opportunities to use UAS to collect ecological data without the complex organizational overhead required for a COA. However, the rules within each of these categories are still liable to change, and UAS pilots are responsible for staying aware of any updates.
UAS pilots in the United States must obtain credentials to operate UAS for research use. Researchers flying under a COA would obtain credentials according to the rules specific to their organization. Flying under Part 107 rules requires a “remote pilot certificate” from the FAA, which is obtained by passing an initial knowledge examination and expires after 2 years. Flying under Recreational Flyer rules requires a TRUST certificate from the FAA, which is obtained by completing a recreational UAS safety test; this certificate does not expire. Unlike permissions granted under a COA, the Part 107 and TRUST credentials stay with the pilot and are transferable if the pilot changes organizations (e.g., a graduate student cannot operate a UAS for research under a university's COA after they graduate, but they would still retain their ability to operate with their FAA-granted credentials).
In general, there are some legal limits to the kinds of UAS flight operations allowed under any regulatory framework. UAS pilots are responsible for ensuring that their equipment and flight plan comply with whichever regulatory framework they are operating under. As with pilot credentials, researchers operating under a COA would need to comply with the flight operational rules specific to their organization. Two of the most relevant flight restrictions for ecologists operating under both Part 107 and Recreational Flyer rules are as follows: (1) the UAS must be within visual line of sight of the pilot in command (or within a visual line of sight of another crew member acting as a “visual observer,” as long as that observer has direct communication with the pilot in command) and (2) the UAS must fly no higher than 400 ft (122 m) AGL. Part 107 rules constrain operations in other specific ways; similar limits may effectively apply to flights under Recreational Flyer rules, which prohibit unsafe operations. For instance, UAS cannot fly faster than 87 knots (161 km/h), and they must remain at least 500 ft (152 m) below clouds and 2000 ft (609 m) horizontally away from clouds. However, high-quality optical data collection usually requires UAS operations to be well within these limits. Additional authorizations are needed to fly in “controlled” airspace (i.e., class B/C/D/E airspace, typically near airports), to fly a UAS above 55 lbs (24.9 kg), and to fly a UAS beyond the visual line of sight. Some of these authorizations are relatively easy to obtain (e.g., many requests to fly in controlled airspace below 400 ft [122 m] AGL can be automatically granted in near real time using the Low Altitude Authorization and Notification Capability), while others are nearly impossible (at the time of this writing) and are likely beyond the reach of an ecological data collection campaign (e.g., beyond visual line-of-sight flights). Finally, the drone itself may need to be marked and registered with the FAA.
The FAA website is usually the best source of the most up-to-date information about the rules that might govern UAS research flights (https://www.faa.gov/uas/; accessed 11 March 2022).
It is important to connect with the appropriate land manager before flying on public land to obtain site access if necessary, to check for temporary closures (e.g., bird nesting), and to be a good neighbor. Because NEON does not own the land on which it operates, flying at NEON sites requires contacting and obtaining permission from the site host; contact information is available on the NEON webpage for each site, and NEON staff may also help facilitate those connections. Additional non-NEON research is allowed at some but not all sites. If permission is obtained, it is important not to disturb any existing research being conducted at those sites, to maintain a 20-m buffer around any NEON-distributed plot, and to completely avoid the area of the tower airshed (which is also delineated on the NEON webpage for each site; e.g., https://www.neonscience.org/data-samples/data/spatial-data-maps). Clearly communicating planned research UAS flights to concerned parties, even when there is every legal right to fly at a particular location, is important for building community credibility and longevity for UAS as a tool for ecologists. Finally, as with flight planning, it is best practice to develop a routine and a checklist (see Appendix S1) for determining whether UAS flights are allowed in the intended survey area under the relevant regulatory framework.
Optical data from UAS-mounted sensors must be radiometrically calibrated in order to convert otherwise arbitrary image pixel values into meaningful, standardized units such as reflectance. Applying image preprocessing steps (e.g., correcting for camera artifacts such as vignetting and dark noise) and subsequent radiometric calibration allows UAS data to be comparable with high-quality scientific data products derived from the NEON AOP. The empirical line method (ELM) has proved to be a simple and accurate UAS radiometric calibration option (Wang & Myint, 2015). ELM requires placing at least two materials with known reflectance, such as calibrated reflectance panels, in the scene; these are imaged while the sensor is in flight. The images containing the calibrated reflectance panels are then used to translate image pixel values to reflectance for each spectral band across the whole survey area. For some sensors, particularly low-cost multispectral sensors designed for agriculture, a downwelling light sensor (DLS, also known as a sunshine sensor) also records data about the illumination levels at the exact moment that each image is captured. This information is often incorporated into the SfM processing to partially correct for varying light conditions throughout the flight. Importantly, the DLS can help account for varying illumination from image to image, but it does not allow for conversion of the image pixel values into a standardized unit of reflectance the way that calibrated reflectance panels can.
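As a minimal sketch of the ELM idea, the calibration reduces to a per-band linear fit between the mean digital numbers (DNs) extracted over each panel and the panels' known reflectances. The panel DN values below are hypothetical, and a real workflow would apply this only after image preprocessing (vignette and dark-noise correction):

```python
def empirical_line_fit(panel_dns, panel_refls):
    """Fit reflectance = gain * DN + offset from two or more calibration
    panels using an ordinary least-squares line (done per spectral band)."""
    n = len(panel_dns)
    mean_x = sum(panel_dns) / n
    mean_y = sum(panel_refls) / n
    sxx = sum((x - mean_x) ** 2 for x in panel_dns)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(panel_dns, panel_refls))
    gain = sxy / sxx
    offset = mean_y - gain * mean_x
    return gain, offset

# Hypothetical mean DNs extracted over a dark (3%) and a gray (48%)
# panel in a single band:
gain, offset = empirical_line_fit([2100, 21500], [0.03, 0.48])

def to_reflectance(dn):
    """Apply the fitted empirical line to any pixel value in this band."""
    return gain * dn + offset
```

With exactly two panels the line passes through both points exactly; adding a third panel (as recommended above) turns this into a genuine least-squares fit and provides a check on linearity.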
NEON implements a complex algorithm to convert its imaging spectrometer data to units of reflectance (Karpowicz & Kampe, 2015) that is founded on a similar principle as ELM. A series of vicarious calibration flights are conducted with the NEON AOP before and after every field season (Leisso et al., 2014). They fly over two large tarps with 48% (medium gray) and 3% (black) reflectance, collect ground-based reflectance measurements of these tarps with an analytical spectral device, and use these data to verify the radiometric calibration of the NEON AOP imaging spectrometer (https://www.neonscience.org/data-collection/imaging-spectrometer). The reflectance of these tarps is meant to represent the upper and lower bounds of reflectance typically seen in nature. NEON's algorithm also compensates for the scattering and absorption of light as it travels through the atmosphere (e.g., haze, water vapor) on its optical path to the AOP.
Using three panels with varying gray levels will allow for the most flexibility in calibration methodology for UAS image data. Ideally, panels should be large enough to cover an area of at least 10 × 10 pixels in the resulting imagery (Wang & Myint, 2015). Panels should be matte (as opposed to shiny or glossy) with a smooth, horizontal surface (Smith & Milton, 1999). Panel colors should be shades of black (near 0% reflectance) and gray, ideally covering the range of reflectance for the subject of interest. White (near 100% reflectance) panels are not recommended because they can saturate and cause other issues (Cao et al., 2019). For plant surveys, we recommend medium gray, dark gray, and black targets because vegetation tends to be about 50% average reflectance, or medium gray. Calibrated reflectance panels often come with the sensor to be integrated with the vehicle, but they can also be purchased separately or made at home. Care must be taken with homemade panels because, even though they may appear a particular shade to the human eye (visible spectrum), they may not have a similar reflectance across all wavelengths observed by a multispectral or hyperspectral sensor. Many studies have identified promising materials for homemade panels: plywood covered with matte paint (Rosas et al., 2020), gray linoleum, and black fine-weave cotton fabric (Cao et al., 2019).
Researchers have vastly different constraints for their budget, environmental conditions in the field, and equipment availability, so “good enough” may be more realistically attainable than the “ideal” radiometric calibration practices described above. If in-flight panel photographs are not possible or if only a small panel is available (as is often the case with panels that come with a sensor), photographs of the panel can be captured either before or after flight. Many off-the-shelf multispectral sensors only come with one small calibration panel, but having one panel is better than none even though this may limit the data calibration possibilities in the future. Further, popular commercial SfM software packages such as Agisoft Metashape and Pix4D may only accommodate one panel, so correcting UAS imagery with a single panel may be the only practical option. When only a single calibration panel is used, choosing a gray panel (rather than a white or black one) helps to avoid crushing or clipping in under/overexposed images.
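With a single panel, no offset term can be estimated, so a common simplification is a per-band scale factor that forces the calibration line through the origin. A minimal sketch with hypothetical values:

```python
def one_point_gain(panel_dn, panel_reflectance):
    """Single-panel calibration: assume reflectance = gain * DN, a line
    through the origin (no offset can be estimated from one panel)."""
    return panel_reflectance / panel_dn

# Hypothetical mean DN over a gray (49%) panel in one band:
gain = one_point_gain(19000, 0.49)

def estimate_reflectance(dn):
    """Scale any pixel value in this band by the single-panel gain."""
    return gain * dn
```

Because the line is forced through zero, any uncorrected dark-noise offset in the imagery propagates directly into the reflectance estimates, which is one reason a two- or three-panel ELM is preferred when feasible.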
Regardless of panel cost, color, or material, it is critical to clean, remeasure, recalibrate, and/or replace them over time to ensure the most accurate reflectance calibration possible. This is especially important when fieldwork involves exposing panels to harsh environmental conditions with dirt, dust, sand, sun, and any other types of physical damage or degradation. Illustrating this point, Scholl and Ku (2021) remeasured a calibrated reflectance panel after 3 years of fieldwork using a handheld ASD (ASD Inc., a Malvern Panalytical Company, Longmont, CO, USA) FieldSpec 4 spectrometer. Figure 2 depicts the manufacturer-provided panel reflectance spectrum from the time of purchase in 2017 (MicaSense) compared with the reflectance spectrum measured 3 years later with the handheld ASD. The reflectance of the panel has decreased by as much as 10% due to the presence of dirt and dust, especially in the shorter wavelengths. The manufacturer advises against cleaning this make and model of the calibration panel as it would force debris further into the pores of the panel material, though newer panels from this manufacturer can be cleaned (see https://support.micasense.com/hc/en-us/articles/360005163934-Calibrated-Reflectance-Panel-Care-Instructions). In general, it is key to ensure that the panel reflectance data being used for radiometric calibration accurately represent the panel's actual reflectance, either using the manufacturer-provided reflectance data for new/clean panels or using updated reflectance measurements on a panel that cannot be restored to its initial conditions.
It is important to consider how the geographic positions of objects within the UAS survey are used to answer the research question. Those positions can range from being globally accurate with precise correspondence to a location on the Earth (e.g., the tree is located at these coordinates, ±5 cm) to being relatively accurate with the spatial relationships and real-world distances between objects in the scene preserved but perhaps all frame-shifted by some amount compared with reality (e.g., the first tree is 5 m away from the second tree, but all the trees are shifted 10 m compared with their true on-the-ground coordinates). In fact, it is possible for the SfM photogrammetry process to reconstruct three-dimensional models and orthomosaics of the area of interest solely using visual cues in individual images without any geolocation data at all, resulting in a relative accuracy between objects in the scene but no ability to make real-world measurements (e.g., the distance between the two trees is 5% of the width of the surveyed area). In order to infer units from these relative distances (e.g., to get the distance in meters), some measure of scale in the imagery is required. Geolocating the SfM photogrammetry products in real-world space requires external information about the geolocation of each input image, such as from the Global Navigation Satellite System (GNSS). Note that GNSS is the generic term for the network of satellites that offer global coverage of geospatial position, of which the US-owned GPS (Global Positioning System) is a part. Most popular off-the-shelf vehicles and/or optical payloads have a basic GNSS antenna and receiver with an accuracy of <10 m, and the optical data collected will be automatically geotagged in the image metadata. 
The automatic integration of these metadata in the most popular SfM photogrammetry software means that the second scenario described above—relative spatial accuracy, but with SfM products frame-shifted by some amount similar in magnitude to the GNSS receiver accuracy—is achievable with no extra steps by the user. If greater accuracy is required than what is provided by the built-in GNSS receiver, however, then additional steps are required.
Ground control points (GCPs), real-time kinematic (RTK) corrections, and post-processed kinematic (PPK) corrections are three solutions for accurately georeferencing images collected by the UAS. GCPs are markers laid out on the ground with known geolocations that are visible in the UAS data and are used to tie the UAS imagery to real-world coordinates during the SfM processing step. The GCP approach can only be as precise as the tool used to measure the geolocations of the GCPs in the field. To improve upon the geolocation accuracy already provided by the image metadata geotags from the basic GNSS receiver that is likely onboard the UAS, a high-precision GNSS receiver must be used to mark the geolocations of the GCPs. A high-precision GNSS receiver may be prohibitively expensive, but it can potentially be borrowed or rented from geodetic services (e.g., the nonprofit UNAVCO lends equipment to NSF-funded projects free of charge). Ideally, GCPs will be placed near edges or randomly throughout the mission area, but the density of GCPs is typically more important, with Santana et al. (2021) finding that 10 GCPs in their 2-ha area of interest were needed for sub-7 cm precision (but using as few as four GCPs still produced 16 cm precision at all flight heights and GCP spatial distributions). Zimmerman et al. (2020) found that it was optimal to place GCPs in the corners of the study site, as well as at low and high elevations within the study site. GCPs must be visible from the sensor, so it is best to place them in bright and open areas. Finding suitable locations in heavily forested areas with closed canopies can be challenging; therefore, it may be beneficial to expand survey areas to include suitable areas for GCPs if none can be found within the area of scientific interest. Examples of effective GCPs are fabric swaths placed in an X, bright-colored bucket lids, or checkered mats (Figure 3).
GCPs with more conspicuous, precise points make for more precise geolocating because that specific point can be more easily matched between the field- and UAS-measured data. For instance, trying to identify the exact center of a bright-colored bucket lid from aerial imagery might allow for 10 cm of mismatch with the exact point measured on the ground, the intersection of two 5-cm-wide pieces of cloth might allow for 5 cm of mismatch, and the crisp intersection of white and black triangles on a checkered mat might allow for only 1 cm of mismatch (Figure 3). Because field measurement of GCP locations can be a slow step, it might be advantageous to install permanent monuments at desirable GCP locations, measure their precise locations once, and then reuse those same points during future data collection (e.g., if not the conspicuous marker itself, perhaps a more discreet piece of rebar over which the actual GCP can be draped just prior to new data collection). Preexisting permanent (or semi-permanent) points may also be used if they can be readily measured on the ground and are visible from the air. For example, NEON TOS plots have permanent markers that have been georeferenced with high precision (approximately 0.3 m) that can be used as GCPs if they are visible to the UAS (Figure 1).
RTK and PPK corrections augment the accuracy of a UAS's built-in GNSS receiver by correcting the noise inherent in the instrument using additional equipment and processing steps, without the need to lay out GCPs and determine their locations. This can result in massive time savings, particularly when surveying large areas. For instance, Gillan et al. (2021) were able to survey and process data covering over 190 ha of rangeland in approximately 30 days versus an estimated 141 days using a conventional UAS workflow, with an estimated 47 days saved just from using an RTK system rather than GCPs. Even with RTK and PPK corrections, it is still considered good practice to lay out some GCPs at precisely known locations and then quantify geolocation error in the final SfM products by measuring the difference between the field- and UAS-measured GCP locations.
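A common way to report that geolocation error is the horizontal root-mean-square error (RMSE) between field-surveyed and SfM-derived check-point coordinates. A minimal sketch, assuming both coordinate sets are in the same projected coordinate reference system (e.g., UTM meters); the coordinates below are hypothetical:

```python
import math

def horizontal_rmse(field_xy, sfm_xy):
    """Root-mean-square horizontal error (m) between field-surveyed and
    SfM-estimated check-point coordinates, given paired (x, y) tuples in
    a common projected coordinate system (meters)."""
    sq_errs = [
        (fx - sx) ** 2 + (fy - sy) ** 2
        for (fx, fy), (sx, sy) in zip(field_xy, sfm_xy)
    ]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Hypothetical check points (UTM meters): field-surveyed vs. SfM-derived.
field = [(450000.00, 4430000.00), (450100.00, 4430050.00)]
sfm = [(450000.03, 4430000.04), (450099.96, 4430050.03)]
err_m = horizontal_rmse(field, sfm)  # ~0.05 m
```

Points used this way should be withheld from the SfM georeferencing step (i.e., treated as check points rather than control points) so the error estimate is independent.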
Image data collected from a UAS can quickly become “big data,” and being intentional about data management will ease friction points at every step in the science workflow, from data collection to manuscript writing (Figure 4). Having a ballpark idea of the total anticipated data storage requirements will help guide data storage hardware purchases such as Secure Digital memory cards (SD cards), external hard drives, internal hard drives, network-attached storage (NAS), third-party cloud storage allotments, or university/organization-provided cloud storage allotments. Given the desired flight plan, the number of survey areas, and the payload (as determined by what meets the science requirements), it should be possible to estimate the amount of data that will be collected per flight, per survey area, and in total for the whole project. It is best practice to adhere as closely as possible to the “3-2-1 backup rule,” where three copies of the data exist with a local, accessible copy on two different devices (e.g., a local computer and an external hard drive) and one copy off-site (e.g., a cloud backup service) (Ruggiero & Heckathorn, 2012).
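A back-of-the-envelope estimate of total storage needs follows from the image capture rate, flight duration, and per-image file size. The values below are hypothetical placeholders to be replaced with your own campaign's numbers:

```python
def flight_storage_gb(images_per_second, flight_minutes, mb_per_image,
                      n_sensors=1):
    """Rough data volume for one flight, in gigabytes (decimal GB)."""
    n_images = images_per_second * flight_minutes * 60 * n_sensors
    return n_images * mb_per_image / 1000.0

# Hypothetical campaign: 1 image/s, 20-min flights, ~12 MB per capture,
# 4 flights per survey area, 10 survey areas.
per_flight = flight_storage_gb(1, 20, 12)  # 14.4 GB per flight
campaign = per_flight * 4 * 10             # 576 GB of raw imagery
with_backups = campaign * 3                # ~1.7 TB total under the 3-2-1 rule
```

Note that the 3-2-1 rule roughly triples the raw storage footprint, and derived SfM products (point clouds, orthomosaics) will add substantially more on top of the raw imagery.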
UAS optical data are typically collected on SD cards inserted in the sensor, so it is important to have enough SD cards prior to flights to accommodate the data being collected in the field. Formatting the SD cards (i.e., erasing all data on them) prior to a new flight is a good practice that ensures the full capacity of each SD card is available for new data collection (provided that the old data are safely transferred to another medium—see below). Swapping out the SD card after each flight for an empty one is advisable, so that the only copy of freshly collected imagery is not lost in the event of a UAS mishap on the next flight. Frequently transferring data from the SD cards to both a laptop hard drive and an external hard drive in the field satisfies the backup rule of having data stored on “two different devices.” Storing those two devices in different locations while in the field (e.g., in two different vehicles, or in the trunk and under the car seat) might prevent some types of data loss (e.g., theft of one of the devices). Once those data are transferred to other devices, it is safe to delete the images on the SD cards in order to reuse them. It is recommended to perform quality assurance (QA) checks on the images while it is still possible to recollect data: for example, by viewing the images on a laptop on-site or while still near the field study site. Check that images are free of obvious artifacts such as over- or underexposure, that the expected number of images was collected, that file sizes appear consistent and reasonable, and that the necessary metadata (e.g., geolocation) were captured with each image. Generally, a full QA assessment cannot be performed in the field due to time and computation limitations, but the field QA should be sufficient to ensure the images can be processed into the desired products.
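A few of these field QA checks can be automated with a short script. This minimal sketch, using only the Python standard library, verifies the image count and flags files with anomalous sizes; checking image content and geolocation metadata would require additional (e.g., EXIF-reading) tools:

```python
import tempfile
from pathlib import Path
from statistics import median

def quick_field_qa(image_dir, expected_count, size_tol=0.5, pattern="*.tif"):
    """Minimal in-field QA: check that the expected number of images exists
    and flag files whose size deviates strongly from the median file size
    (a possible sign of corruption or a dropped frame)."""
    files = sorted(Path(image_dir).glob(pattern))
    count_ok = len(files) == expected_count
    sizes = [f.stat().st_size for f in files]
    mid = median(sizes) if sizes else 0
    suspicious = [
        f.name for f, size in zip(files, sizes)
        if mid and abs(size - mid) / mid > size_tol
    ]
    return count_ok, suspicious

# Demonstration with fake image files (three normal, one truncated):
demo_dir = Path(tempfile.mkdtemp())
for i in range(3):
    (demo_dir / f"img{i}.tif").write_bytes(b"x" * 1000)
(demo_dir / "img3.tif").write_bytes(b"x" * 10)
count_ok, suspicious = quick_field_qa(demo_dir, expected_count=4)
```

Here the truncated file is flagged because it is far smaller than the median, while the count check confirms no frames are missing.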
Some NEON sites (e.g., NIWO) have a field house that may be accessed, with permission, for laptop-friendly workspaces and/or charging options.
Once data collection is completed, data management can be broken into a quick access phase, when data need to be readily available (Figure 4, short-term storage), and a slower access phase, which concerns the longer term storage of both data and metadata (Figure 4, long-term storage). During the quick access phase, the data should be as “close” to the workstation doing the SfM processing as possible—ideally on a fast internal hard drive (e.g., a solid-state drive) on the same computer as the SfM software. Having a good long-term storage solution for the imagery (and derived data products) is important for the slower access phase, and having a copy of those data off-site will satisfy the 3-2-1 backup rule. Some universities/organizations might already have storage infrastructure capable of accommodating vast data volumes and off-site backups (e.g., research computing storage). If university/organization storage infrastructure is not available, data storage-specific computing hardware (e.g., NAS) can be paired with third-party cloud storage (e.g., CyVerse) to meet long-term data management needs. In this case, using slower but lower cost spinning hard disks instead of solid-state drives is a good option for the local data backup because data volume (i.e., the ability to back up data for many projects) can be prioritized over data access speed in the slower access phase. For such high volumes of data, establishing “data levels” that characterize how derived each new processed product is makes the data easier to navigate and work with (Wyngaard et al., 2019). Typically, Level 0 represents raw data (the original images from the sensor in the case of optical data) and higher levels are derived from lower levels (e.g., figure 4 in Koontz et al., 2021 shows data levels for optical data collected for a forest ecology project).
For public-facing storage, we suggest publishing all data product levels to a long-term data repository with a digital object identifier in the open science spirit of broadening access to research (Figure 4, public-facing). Ideally, this includes the original raw images taken from UAS missions, which may be processed in the future to even higher quality products given the rapid advances in the SfM photogrammetry software. This can prove costly with particularly high data volumes, but it may be possible to rely on university/organization cyberinfrastructure resources, or other options that cater specifically to researchers aiming to practice open science principles (e.g., CyVerse and Open Science Framework).
One common approach for processing UAS-derived imagery such that it can be integrated with other data sources (e.g., NEON) is SfM photogrammetry, which converts the original images into data products such as a two-dimensional orthomosaic and a three-dimensional point cloud. Many software applications are available for SfM photogrammetry that produce results of similar quality (Forsmoo et al., 2019), and many have steep discounts for research or educational use (e.g., Agisoft Metashape and Pix4DMapper). Some free, open-source options are also available (e.g., OpenDroneMap) and are steadily improving. SfM photogrammetry can be intensive on the CPU (central processing unit), RAM, disk drives, and GPU (graphics processing unit), so a workstation that balances these hardware components is ideal. Higher-end gaming desktops are often sufficiently powerful workstations for processing images locally, but cloud-processing options also exist (e.g., university high-performance computing resources, add-on capabilities of the specific SfM software purchased, and CyVerse—see Swetnam et al., 2018). Even if most of the processing takes place in the cloud, it can still be beneficial to have a relatively powerful local machine in order to readily view and manipulate the resulting data products.
SfM workflows require myriad decisions about processing parameters, all of which might affect the quality of the resulting data products. An excellent SfM guide has been published by the US Geological Survey (USGS) for the Agisoft Metashape software (Over et al., 2021), and some researchers have experimented with various SfM processing parameter combinations to empirically determine optimal parameter sets for particular use cases (Tinkham & Swayze, 2021; Young et al., 2022), though some trial and error may still be required for new study systems. Some software packages allow for automating the SfM processing using coding scripts, which then serve as a transparent and reproducible record of the workflow. Other software workflows are based on a point-and-click graphical user interface (GUI), which requires the user to take note of the processing steps. Working in a coordinate reference system that measures local distances in true units of distance (e.g., meters in a Universal Transverse Mercator coordinate reference system rather than degrees in a longitude/latitude coordinate reference system) will eliminate some friction points with the resulting SfM products (particularly the three-dimensional point cloud). In any case, it is important to use a consistent coordinate reference system across all of your data products (e.g., GNSS positions of GCPs, GNSS locations of the UAS camera). When working with optical data, it may be necessary to “spectrally resample” the high-spectral-resolution NEON AOP data in order to match the sensor payload of the UAS, whose spectral resolution is likely coarser and not aligned with that of the NEON instrument (Figure 5). Finally, calculating derived spectral indices such as the normalized difference vegetation index (NDVI; Rouse et al., 1973) from the original reflectance channels can help with data harmonization across multiple sensors by reducing some of their individual reflectance inaccuracies (Cao et al., 2019).
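As a simplified illustration of spectral resampling, a broadband UAS channel can be simulated from narrowband reflectance (e.g., NEON AOP bands) as a weighted average using the UAS sensor's relative spectral response, after which an index such as NDVI can be computed identically for both sensors. All values below are hypothetical:

```python
def resample_to_band(narrowband_refl, rsr_weights):
    """Simulate a broadband sensor channel from narrowband reflectance
    (e.g., NEON AOP bands) as an average weighted by the broadband
    sensor's relative spectral response, sampled at the same wavelengths."""
    num = sum(r * w for r, w in zip(narrowband_refl, rsr_weights))
    return num / sum(rsr_weights)

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

# Hypothetical narrowband reflectances (e.g., 640-680 nm region) and a
# boxcar-like relative spectral response for one red UAS channel:
refl = [0.05, 0.05, 0.04, 0.04, 0.05]
rsr = [0.2, 1.0, 1.0, 1.0, 0.2]
red = resample_to_band(refl, rsr)  # band-weighted red reflectance
nir = 0.45                         # hypothetical resampled NIR reflectance
vegetation_ndvi = ndvi(red, nir)
```

Real relative spectral responses are smooth curves rather than boxcars, and tools such as the hsdar package mentioned below implement this convolution against published sensor response data.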
After the SfM workflow is completed, there are many options for further processing the resulting data products (e.g., orthomosaics and point clouds) such that they can be integrated with NEON. Many free, open-source software tools exist for working with geospatial data products produced by UAS and NEON including QGIS (https://qgis.org/en/site/) for visualization and GUI-based manipulation of raster and vector data types, CloudCompare (https://www.danielgm.net/cc/) for visualization and GUI-based manipulation of point clouds, and a suite of packages (https://cran.r-project.org/web/views/Spatial.html) for the R programming language (R Core Team, 2021). Several packages have also been developed specifically for working with NEON data, including neonUtilities (Lunch et al., 2021), neonhs (Joseph & Wasser, 2021), geoNEON (National Ecological Observatory Network, 2020), and NeonTreeEvaluation (Weinstein et al., 2021). A recent review by Atkins et al. (2022) describes the ecosystem of R packages available for working with forestry data, many of which are relevant for the types of geospatial data produced by UAS and NEON. More generally, working with these kinds of high-resolution geospatial data, which are often classically “big,” can benefit from following the few simple rules recently outlined by Mahood et al. (2022).
Forest inventories describe the geolocation and physical attributes of individual trees, and provide critical information for management decision-making and advancing ecological theory (Young et al., 2022). Remote sensing approaches to creating forest inventories can cover more area than field-based methods at a lower cost per area, and recent approaches still allow for the characterization of individual trees (Weinstein et al., 2019). The NEON TOS collects field-based forest inventory data (the “Woody Plant Vegetation Structure” data product; DP1.10098.001) and remote sensing data in their AOP that have been used to generate forest inventory data (Weinstein et al., 2020). The field-based data are restricted to 20 × 20 m field plots, while the AOP data cover dozens of square kilometers but at moderately coarse resolution (10 cm for RGB imagery, 1 m for imaging spectrometer data, and 1 m for lidar data). UAS have the capacity to fill in missing scales of observation for creating forest inventories by capturing a broader spatial extent than field-based NEON data but at a finer spatial resolution than NEON AOP data. With this as a motivation, here we present a case study where we collect and process UAS data coincident with a NEON TOS plot to create a forest inventory. We then benchmark that forest inventory against the NEON TOS field data and describe how to extract individual tree-scale spectral information that is comparable to that collected by the NEON AOP. We use the previous section's “core principles” as a framework for describing our workflow, and provide all data and code to further aid our mental model building.
Our vehicle was a DJI Matrice 100, a quadcopter with a proven track record of safe, predictable flights. The vertical takeoffs and landings of this rotorcraft-style drone allowed us to operate the vehicle from a clearing as small as the width of the dirt access road to the site. We used a piece of plywood laid on the ground as a flat, stable takeoff platform that would also help to minimize the amount of dust kicked up by the rotor wash during takeoff and landing. The Matrice 100 has a relatively high lift capacity that allows for a payload to be integrated and is heavier than many consumer quadcopters, which makes it both more stable in windy conditions and more challenging to transport beyond a road. We charged all vehicle batteries the night prior to the flight.
We captured imagery using two co-mounted sensors: a gimbal-stabilized DJI Zenmuse X3 RGB camera and a MicaSense RedEdge 3 sensor, which is sensitive to electromagnetic radiation in five distinct spectral channels across the visible and near-infrared wavelengths. The DJI Zenmuse X3 camera has a focal length of 3.6 mm, a sensor width of 6.17 mm, and a sensor height of 4.55 mm. The MicaSense RedEdge 3 sensor has a focal length of 5.5 mm, a sensor width of 4.8 mm, and a sensor height of 3.6 mm. We used a fixed mount and a prebuilt integration kit for the MicaSense RedEdge 3 made by the sensor manufacturer to integrate with our vehicle. This particular mount is angled such that the sensor faces approximately downward when the aircraft is tilted forward in flight, and the integration kit allows the sensor to share power with the vehicle batteries. The RedEdge 3 sensor's image capture mechanism operates independently from the flight planning app or the vehicle's flight computer, though deeper integration with specific vehicles is possible with newer versions of the sensor. Prior to flight, we connected to the RedEdge 3 sensor with a laptop via its built-in Wi-Fi to verify that the sensor's onboard GNSS receiver was functioning properly and to initiate image capture. We set the RedEdge 3 sensor to capture images at a rate of 1 image/s. We set the DJI Zenmuse X3 camera to capture images at a rate of 0.5 images/s. Using the quantum efficiency and filter bandpass sensitivity of an average RedEdge 3 sensor provided by MicaSense, we estimated the relative spectral response of the instrument, which characterizes how the sensor captures light across the electromagnetic spectrum (Figure 5). We provide the relative spectral response data in a format that makes it interoperable with the hsdar package (Lehnert et al., 2019).
Our data collection took place on a single day, 9 October 2019, starting at 2:00 PM Mountain Daylight Time under mostly sunny, light-wind conditions. Ideally, we would have flown closer to solar noon to minimize shadows in the imagery, particularly this late in the year.
Our aerial transects were 17.14 m apart, our vehicle flew at 3.16 m/s, the side overlap of the RedEdge 3 imagery was 80.4%, and the front overlap of the RedEdge 3 imagery was 95.2%. The estimated number of photographs per point in the survey area was 200 for the Zenmuse X3 camera and 105.5 for the MicaSense RedEdge 3 sensor. The crosshatch flight plan effectively doubles the expected number of photographs per point to 400 for the X3 camera and 211.0 for the RedEdge 3 (Figure 1).
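These overlap figures follow directly from the flight geometry. Below is a minimal sketch of the arithmetic, assuming a nadir-pointing camera and an above-ground altitude of roughly 100 m, which is back-calculated from the reported overlaps rather than stated in this section; the sensor dimensions and focal length are those given above for the RedEdge 3.

```python
# Flight-geometry arithmetic for the MicaSense RedEdge 3 survey. The
# ~100 m above-ground altitude is an assumption back-calculated from the
# reported overlaps; sensor dimensions and focal length are from the text.
def footprint_m(altitude_m, focal_mm, sensor_dim_mm):
    # Ground footprint along one sensor dimension, by similar triangles
    return altitude_m * sensor_dim_mm / focal_mm

altitude = 100.0                           # m above ground (assumed)
across = footprint_m(altitude, 5.5, 4.8)   # across-track footprint (m)
along = footprint_m(altitude, 5.5, 3.6)    # along-track footprint (m)

transect_spacing = 17.14       # m between adjacent flight lines
photo_spacing = 3.16 / 1.0     # ground speed (m/s) / capture rate (img/s)

side_overlap = 1 - transect_spacing / across
front_overlap = 1 - photo_spacing / along
print(f"side overlap {side_overlap:.1%}, front overlap {front_overlap:.1%}")
# → side overlap 80.4%, front overlap 95.2%
```

The same relations can be run in reverse during flight planning: pick the desired overlaps and ground sampling distance, then solve for transect spacing, speed, and altitude.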
We obtained permission to access the NIWO NEON site from the site host, the University of Colorado Boulder Mountain Research Station, and NEON itself. We flew under the FAA Part 107 rules for commercial drone operations with a current remote pilot certificate, and ensured that the airspace was free for operating the UAS.
The MicaSense RedEdge 3 multispectral camera comes with a small, calibrated gray reflectance panel that reflects approximately 60% of light across the entire spectral range captured by the sensor. Prior to flight, we held the UAS over the panel and captured an image of it, ensuring that our shadow did not cover the panel. The RedEdge 3 also integrates a downwelling light sensor (DLS), which faces upward and measures illumination at the same time as each downward-facing image capture. We included the calibration panel photographs in the SfM processing workflow and also enabled the image-to-image corrections from the DLS. When loading the calibration panel photographs into the SfM software, we set the “known reflectance” of the panel in each of the five spectral channels to the values that we measured for this particular panel (Figure 2), rather than those provided by the manufacturer.
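Conceptually, the panel anchors a per-band mapping from raw digital numbers to reflectance. The following is a simplified one-point empirical-line sketch of that mapping; the SfM software's actual radiometric pipeline also applies vignetting, exposure, and DLS-based illumination corrections, and all numeric values below are hypothetical.

```python
import numpy as np

def panel_calibrate(image_dn, panel_dn, panel_reflectance):
    """One-point empirical-line correction for a single spectral band.

    Scales raw digital numbers (DN) so that the panel's mean DN maps onto
    its known reflectance. Real workflows (e.g., inside the SfM software)
    also apply vignetting, exposure, and DLS illumination corrections.
    """
    gain = panel_reflectance / np.mean(panel_dn)
    return image_dn * gain

# Hypothetical values: pixels over the panel average 30000 DN, and the
# panel's measured reflectance in this band is 0.60.
panel_pixels = np.array([29900.0, 30100.0, 30000.0])
scene_dn = np.array([5000.0, 15000.0, 30000.0])
reflectance = panel_calibrate(scene_dn, panel_pixels, 0.60)
print(reflectance)  # → [0.1 0.3 0.6]: panel-bright pixels map to 0.60
```

Measuring the panel's reflectance directly (as we did; Figure 2) changes only the `panel_reflectance` value, not the structure of the correction.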
We laid out orange cloth Xs over the nine permanent markers within the NIWO_017 field site (red points in Figure 1), five of which were visible from the air. These ground control points (GCPs) were clustered in the center of the flight area, leaving the edges without geolocation representation, which was not ideal (Santana et al., 2021; Zimmerman et al., 2020).
For data collection, we recorded each flight's imagery on a separate 32-gigabyte SD card rated at >90 MB/s write speed that we formatted prior to the flight. For multiday trips, or when SD cards must be reused, we transfer imagery from the SD cards to at least one portable solid-state hard drive (Samsung T series). Upon returning from the field, we transferred the images from the SD cards (or the portable solid-state hard drive, as the case may be) to two locations: (1) the solid-state hard drive of a local desktop gaming computer for short-term storage and processing, and (2) a network-attached storage (NAS) device with six spinning-disk hard drives in a RAID array for long-term storage. Both the short-term storage (local desktop) and long-term storage (NAS) solutions are backed up to the cloud using a third-party backup client (Backblaze) at a cost of ~5.00 USD per terabyte per month. We used the same data levels as Koontz et al. (2021), except we did not process our data to Level 4. To allow future data collection to integrate easily into this project, we compartmentalized each data product into a folder for the specific flight date (9 October 2019), housed within a folder for the specific flight location (NIWO_017). We used the Open Science Framework for public-facing storage (https://doi.org/10.17605/OSF.IO/ENBWU).
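This compartmentalization can be scripted so that every new flight lands in the same place. A small sketch using Python's pathlib follows; the data-level folder names are illustrative stand-ins, not the exact labels from Koontz et al. (2021).

```python
from pathlib import Path

# Illustrative site/date/data-level folder hierarchy. The data-level
# names here are hypothetical, not the exact labels of Koontz et al. (2021).
root = Path("drone-data")
site, flight_date = "NIWO_017", "2019-10-09"
levels = ["L0_raw-images", "L1_orthomosaic-dsm", "L2_chm", "L3_tree-crowns"]
for level in levels:
    (root / site / flight_date / level).mkdir(parents=True, exist_ok=True)

# A future flight at the same site drops into a sibling date folder, so
# downstream scripts can iterate over dates without special cases.
flight_dirs = sorted(p.name for p in (root / site).iterdir())
print(flight_dirs)  # → ['2019-10-09']
```

Keeping the hierarchy machine-generated rather than hand-made is what lets later processing scripts discover new flights automatically.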
We used a local desktop computer (Alienware Aurora R7 with an Intel Core i7-8700k 3.70-GHz hexacore processor and 64 gigabytes of RAM) for data processing. We followed the USGS workflow to process our raw MicaSense RedEdge 3 imagery into a digital surface model, an orthomosaic, and a dense point cloud using Agisoft Metashape version 1.6.1 (Over et al., 2021). We noted each step in the SfM process, as well as the parameter choices we made, in a .txt file (https://github.com/mikoontz/neon-drone-workflow/blob/master/workflow/03_structure-from-motion-of-drone-data/01_drone_agisoft-metashape-processing-steps.txt). We created a script that allows readers to download cropped, relatively small versions of these SfM products in order to follow along with our post-SfM processing steps (https://github.com/mikoontz/neon-drone-workflow/blob/master/workflow/04_get-processed-example-drone-data/01_get-example-cropped-L1-and-L2-data.R). We used R for all post-SfM steps, particularly the sf package for working with vector data (Pebesma, 2018) and the terra package (Hijmans, 2021a) for working with raster data. The terra package is intended as a replacement for the raster package (Hijmans, 2021b), but some other R packages have not yet migrated their codebases to terra. In these cases, we coerced terra objects to raster objects to preserve the interoperability of the various packages.
We classified the dense point cloud into “ground” and “nonground” points using a cloth simulation filter algorithm (Zhang et al., 2016) implemented in the lidR (Roussel et al., 2020; Roussel & Auty, 2021) package. Using the ground points, we interpolated a digital terrain model (DTM) representing the height of the ground (without the vegetation). We subtracted this DTM from the SfM-derived digital surface model (DSM) to create a canopy height model (CHM) representing the height of the vegetation in the survey area.
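The CHM step itself is a cell-wise raster subtraction. Here is a toy numpy sketch of that arithmetic; the real workflow operates on georeferenced rasters via lidR/terra, and the elevation values below are made up.

```python
import numpy as np

# Toy 3x3 elevation grids in meters; in the real workflow these are
# georeferenced rasters (SfM-derived DSM, ground-point-interpolated DTM).
dsm = np.array([[3010.0, 3012.0, 3015.0],
                [3011.0, 3018.0, 3020.0],
                [3010.4, 3011.0, 3012.0]])
dtm = np.array([[3010.0, 3011.0, 3011.5],
                [3010.5, 3011.0, 3011.5],
                [3010.5, 3011.0, 3012.0]])

chm = dsm - dtm              # cell-wise height of vegetation above ground
chm = np.clip(chm, 0, None)  # small negative heights are interpolation noise
print(chm.max())             # → 8.5, the tallest canopy cell in the toy grid
```

Clipping negative heights to zero is a common cleanup step, since the DTM is interpolated and can locally sit above the DSM in ground-only cells.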
For our comparison, we set the threshold argument of the compute_precision_recall() function to 0.1, such that a predicted tree was considered correctly detected if the area of the intersection of its bounding box with an annotated crown's bounding box, divided by the area of the union of those bounding boxes (i.e., their intersection-over-union), exceeded 0.1. Our UAS-derived map of detected trees had a recall of 0.788 and a precision of 0.276, resulting in an F-score of 0.409. For comparison, the DeepForest algorithm's predictions for the locations of trees at NIWO_017 (Weinstein et al., 2021) had a recall of 0.861, a precision of 0.798, and an F-score of 0.828. The poorer performance of the UAS-derived tree detection approach suggests that a different combination of flight parameters, SfM photogrammetry parameters, or tree detection algorithm/parameters might be better suited to the subalpine forest at NIWO (Young et al., 2022).
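The matching criterion and summary statistics are simple to state in code. The sketch below shows intersection-over-union matching for two hypothetical crown bounding boxes, and the F-score arithmetic applied to the precision and recall values reported above.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def f_score(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Hypothetical boxes: two crowns overlapping by one unit square
predicted = (0.0, 0.0, 2.0, 2.0)
annotated = (1.0, 1.0, 3.0, 3.0)
print(iou(predicted, annotated) > 0.1)   # → True: counts as a detection
print(round(f_score(0.276, 0.788), 3))   # → 0.409, the reported F-score
```

Note how permissive an IoU threshold of 0.1 is: the boxes above share only one-seventh of their combined area yet still count as a match.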
To integrate our UAS data with NEON AOP reflectance data, we calculated NDVI from each sensor. We used the neonUtilities package to download the NEON AOP imaging spectrometer data (data product DP3.30006.001) covering the NIWO_017 site in 2019, using the easting and northing of the centroid of the NIWO_017 plot and a 20-m buffer as arguments to the byTileAOP() function (Lunch et al., 2021). We used the neonhs package (Joseph & Wasser, 2021) to convert the raw NEON AOP data product into a raster object that is easier to manipulate in R. Because the imaging spectrometer's spectral response overlaps with, but does not perfectly align with, that of the MicaSense RedEdge 3 sensor, we spectrally resampled the NEON AOP data to match the spectral resolution of the RedEdge 3 using the hsdar package (Lehnert et al., 2019) and the relative spectral response that we derived (Figure 5). We then calculated NDVI from both the UAS-derived orthomosaic and the spectrally resampled NEON AOP orthomosaic. Figure 6 compares NDVI as captured by the NEON AOP flight in August with our UAS-derived NDVI from our flight in October. We used the exactextractr package to extract the mean and standard deviation of NDVI derived from the UAS and from the spectrally resampled NEON AOP for each segmented tree crown (Baston, 2021). Figure 7 compares NDVI derived from the NEON AOP and the UAS at the individual tree scale.
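Both the NDVI calculation and the spectral resampling reduce to simple weighted arithmetic. The sketch below illustrates the idea with hypothetical reflectance and response values; the actual workflow uses hsdar and the derived RedEdge 3 relative spectral response (Figure 5).

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index (Rouse et al., 1973)
    return (nir - red) / (nir + red)

# Spectral resampling sketch: average narrow imaging-spectrometer bands,
# weighted by a broadband sensor's relative spectral response, to simulate
# that sensor's band. All values below are hypothetical, not the actual
# RedEdge 3 response or NIWO reflectances.
aop_nir_bands = np.array([0.42, 0.45, 0.48, 0.50])  # narrow-band reflectance
response = np.array([0.2, 0.5, 0.9, 0.4])           # relative spectral response
nir = np.average(aop_nir_bands, weights=response)   # simulated broadband NIR

red = 0.06  # hypothetical broadband red reflectance
print(round(ndvi(nir, red), 3))
```

This is why the relative spectral response in Figure 5 matters: the same narrow-band spectrum yields different simulated broadband values under different response weightings, and ignoring that difference would bias sensor-to-sensor NDVI comparisons.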
Macroecology will benefit from a “macroscope” to enable the study of broad-extent phenomena across multiple scales of biological, geophysical, and social processes (Beck et al., 2012; Dornelas et al., 2019; Lawton, 1996). The ideal macroscope comprises a nested array of tools that provide full coverage of spatial and temporal observational domains. In their complementarity, the value of multiple observational tools in concert is more than the sum of the parts (Dornelas et al., 2019). Pairing UAS with NEON partially completes the constellation of Earth-observing tools that contribute to the macroscope, and combines the flexibility of UAS with the high quality and consistency of NEON. In this work, we aid the adoption of these tools among macroecologists by providing a mental model—a contextual framework—and some practical considerations for their integration.
Challenges remain for integrating UAS with NEON, but they are surmountable. Some of these challenges are fundamentally associated with “big,” cross-scale data. Integrating data across scales brings a host of potential pitfalls that could pollute inference if care is not taken to avoid them (Zipkin et al., 2021). Big data in ecology are relatively new (Farley et al., 2018), and approaches to UAS-derived big data are fairly ad hoc across researchers (Wyngaard et al., 2019). Maintaining supportive communities of practice, such as the High Latitude Drone Ecology Network (https://arcticdrones.org/), can help overcome some of these idiosyncratic approaches. In the same vein, NEON provides an aspirational target for UAS educational resources, which are critical to ensuring that would-be NEON/UAS users have the environmental data science skills necessary to turn their data into inference (Hampton et al., 2017).
The proliferation of reasonably low-cost, off-the-shelf, drone-ready sensors (many designed for precision agriculture use) creates a need for validation of whether those instruments produce “science-grade” data (which itself is a relative term, depending on what the specific science requirements are for a given project). This validation may be achieved via direct comparison of the low-cost sensors with “state-of-the-science” instruments using coincident flights (e.g., Fawcett et al., 2020). Clear documentation of data provenance including sensor characteristics, data acquisition methods (e.g., flight pattern), and data acquisition conditions (e.g., time of day, cloud cover) will enable more rigorous data integration across instruments. Thus, integrating UAS operations with NEON can help anchor the community to the common currency of NEON data types, organization, and collection protocols, which will enhance the interoperability of UAS data.
Cyberinfrastructure for managing and processing UAS data is not yet built in a way that encourages consistency between projects or researchers. In this way, NEON again provides an aspirational example for how purpose-built cyberinfrastructure can facilitate macroecology. In fact, the foundational resources for building a valuable architecture for UAS data may already be represented in other NSF-sponsored projects (e.g., CyVerse, OpenTopography, and Open Science Framework). UAS-enabled research would benefit greatly from data storage solutions and streamlined analysis pipelines that are intentionally built to support a wide variety of users and use cases.
Critically, “accessibility” and “democratization” of macroecology encompass a broad, multifaceted notion of availability for and usability by anyone, and obstacles to accessibility extend beyond those we sought to remedy here. That is, our work to increase access to the elusive broad-extent/fine-grain observational domain with a mental model and an open workflow is an important but incomplete effort toward accessible macroecology. Illustrating this point, the reduced cost of Landsat images brought more researchers into the user base from lower resourced institutions and underrepresented parts of the world to do more topically diverse science (Nagaraj et al., 2020), but some barriers to access still exist (Miller et al., 2016). For instance, three quarters of users are men, and 65% of users are academic researchers (Miller et al., 2016). The Landsat archive was undeniably made more accessible to the collective benefit of science and society (Miller, 2016; Nagaraj et al., 2020), but even broader access (and therefore greater value; Miller, 2016) is possible. Greater accessibility of UAS and NEON as tools for macroecology will similarly require their user communities to be self-reflective and proactive about identifying and eliminating barriers to entry (Nagy et al., 2021).
- Filling in spatial scales missed by NEON data collection (e.g., collecting data on a similar vegetation type of a NEON site but outside of NEON's direct footprint, capturing data at spatial resolutions finer than 10 cm in order to measure post-disturbance vegetation recovery);
- Filling in temporal scales missed by NEON data collection (e.g., capturing data in a year when a NEON site is skipped by the AOP, capturing data at a site multiple times per year to understand how snowpack changes throughout the year, tracking individual plant phenology through time and linking to PhenoCam data, and understanding temporal trends in biodiversity);
- Opportunistic data collection (e.g., capturing data immediately after a disturbance event to measure its severity);
- Connecting NEON data to other Earth-observing systems using UAS data as a bridge (e.g., spectrally unmixing Landsat pixels to determine relative species compositions by matching UAS spectral measurements to NEON TOS field measurements; and coordinating NEON data collection with UAS and other data collection to expand the utility of NEON products (e.g., Chadwick et al., 2020; Wang et al., 2020));
- Supplementing NEON data using sensors that are not part of the NEON suite of sensors (e.g., thermal data to compare thermal regulation of different plant species, and measuring water stress in different trees across gradients of topoclimate);
- Validating lower cost, off-the-shelf payloads against the state-of-the-science NEON data collection (e.g., determining how well a multispectral imager designed for agriculture captures surface reflectance, and determining how well an algorithm detects the trees in a NEON vegetation structure plot);
- Replacing high-cost NEON AOP flights with lower cost alternatives (e.g., if the drone-derived data are “good enough” compared with the AOP, can we reduce the operational costs of the AOP?); and
- Using NEON data as a common currency for validating new methods (e.g., the case study we showed here, comparing a deep learning/orthomosaic-based approach and a variable window filter/CHM approach to detecting individual trees measured by the NEON TOS).
UAS can help ecologists harness the NEON data revolution with their complementary approach to measuring the understudied broad-extent/fine-grain observational domain. NEON's long-term, consistent, high-quality, continental-extent measurements enable data-driven discovery that is enhanced with new opportunities to explore cross-scale questions when paired with the relatively affordable, flexible measurements of UAS. We hope that by providing a mental model for data collection and integration, we remove some of the friction points associated with these tools and make them more accessible. Further democratizing macroecology will require community support for an open science ethos, which might include: low-cost cyberinfrastructure, open observatories, data networks, well-documented workflows, open education resources that increase data skills, and more inclusive practices that create opportunities for researchers across a diversity of career stages and institutions to participate in and contribute to “big data” macroecology. We envision NEON as an anchor for UAS-enabled ecology, with future research efforts that embrace the spirit of democratization and strive to broaden participation in this emerging discipline.
Funding for the 2019 NEON Science Summit was provided by NSF Award number 1906144. Additional funding was provided by Earth Lab through the University of Colorado, Boulder's Grand Challenge Initiative, the Cooperative Institute for Research in Environmental Sciences, and the US Geological Survey North Central Climate Adaptation Science Center. Data collection was supported by the USDA Forest Service Western Wildland Environmental Threat Assessment Center. Megan E. Cattau was supported in part by the National Aeronautics and Space Administration New Investigator Program (grant number 80NSSC18K0750). Jennifer K. Balch was supported in part by the NSF's CAREER (grant number 1846384) and Macrosystems (grant number 2017889) programs. The NEON is a program sponsored by the NSF and operated under cooperative agreement by Battelle. This material is based in part upon work supported by the NSF through the NEON Program. We thank Chelsea Nagy, Jeff Sloan, Natalie Latysh, Harland Goldstein, and two anonymous reviewers for their feedback, which greatly improved the manuscript. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the US Government.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
Appendix S1: ecs24206-sup-0001-AppendixS1.pdf (PDF document, 453.1 KB)
- 2017. “Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry.” Remote Sensing 9: 1110.
- 2020. Forest Inventory and Analysis, Fiscal Year 2019 Business Report. Washington, DC: USDA Forest Service.
- 2013. “Lightweight Unmanned Aerial Vehicles Will Revolutionize Spatial Ecology.” Frontiers in Ecology and the Environment 11: 138–46.
- 2012. “Carnegie Airborne Observatory-2: Increasing Science Data Dimensionality via High-Fidelity Multi-Sensor Fusion.” Remote Sensing of Environment 124: 454–65.
- 2007. “Carnegie Airborne Observatory: In-Flight Fusion of Hyperspectral Imaging and Waveform Light Detection and Ranging (wLiDAR) for Three-Dimensional Studies of Ecosystems.” Journal of Applied Remote Sensing 1: 013536.
- 2013. “High-Fidelity National Carbon Mapping for Resource Management and REDD+.” Carbon Balance and Management 8: 7.
- 2019. “Vegetation Monitoring Using Multispectral Sensors Best Practices and Lessons Learned from High Latitudes.” Journal of Unmanned Vehicle Systems 7: 54–75.
- 2022. “Open-Source Tools in R for Forestry and Forest Ecology.” Forest Ecology and Management 503: 119813.
- 2020. “NEON Is Seeding the Next Revolution in Ecology.” Frontiers in Ecology and the Environment 18: 3.
- 2021. “exactextractr: Fast Extraction from Raster Datasets Using Polygons.” R package version 0.7.1. https://CRAN.R-project.org/package=exactextractr.
- 2012. “What's on the Horizon for Macroecology?” Ecography 35: 673–83.
- 2019. “Radiometric Calibration Assessments for UAS-Borne Multispectral Cameras: Laboratory and Field Protocols.” ISPRS Journal of Photogrammetry and Remote Sensing 149: 132–45.
- 2020. “Integrating Airborne Remote Sensing and Field Campaigns for Ecology and Earth System Science.” Methods in Ecology and Evolution 11: 1492–508.
- 2016. “Ultra-Fine Grain Landscape-Scale Quantification of Dryland Vegetation Structure with Drone-Acquired Structure-from-Motion Photogrammetry.” Remote Sensing of Environment 183: 129–43.
- 2015. “Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure.” Remote Sensing 7: 13895–920.
- 2013. “High Spatial Resolution Three-Dimensional Mapping of Vegetation Spectral Dynamics Using Computer Vision.” Remote Sensing of Environment 136: 259–76.
- 1993. “Cockpit Checklists: Concepts, Design, and Use.” Human Factors 35: 345–59.
- 2020. “Customizing Unmanned Aircraft Systems to Reduce Forest Inventory Costs: Can Oblique Images Substantially Improve the 3D Reconstruction of the Canopy?” International Journal of Remote Sensing 41: 3480–510.
- 2019. “Towards a Macroscope: Leveraging Technology to Transform the Breadth, Scale and Resolution of Macroecological Data.” Global Ecology and Biogeography 28: 1937–48.
- 2018. “Scales of Data.” Nature Ecology & Evolution 2: 769–70.
- 2018. “The Spatial and Temporal Domains of Modern Ecology.” Nature Ecology & Evolution 2: 819–26.
- 2018. “Situating Ecology as a Big-Data Science: Current Advances, Challenges, and Solutions.” BioScience 68: 563–76.
- 2020. “Multi-Scale Evaluation of Drone-Based Multispectral Surface Reflectance and Vegetation Indices in Operational Conditions.” Remote Sensing 12: 514.
- 2019. “Structure from Motion Photogrammetry in Ecology: Does the Choice of Software Matter?” Ecology and Evolution 9: 12964–79.
- 2018. “UAV Photogrammetry of Forests as a Vulnerable Process. A Sensitivity Analysis for a Structure from Motion RGB-Image Pipeline.” Remote Sensing 10: 912.
- 2021. “Innovations to Expand Drone Data Collection and Analysis for Rangeland Monitoring.” Ecosphere 12: e03649.
- 1999. “Rationale for a National Annual Forest Inventory Program.” Journal of Forestry 97: 16–20.
- 2017. “Unmanned Aerial Systems for Precision Forest Inventory Purposes: A Review and Case Study.” The Forestry Chronicle 93: 71–81.
- 2017. “Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone.” Remote Sensing of Environment 202: 18–27.
- 2004. “Forget Politicizing Science. Let's Democratize Science!” Issues in Science and Technology 21: 25–8.
- 2017. “Skills and Knowledge for Data-Intensive Environmental Research.” Bioscience 67: 546–57.
- 2013. “Big Data and the Future of Ecology.” Frontiers in Ecology and the Environment 11: 156–62.
- 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.” Science 342: 850–3.
- 2014. “Macrosystems Ecology: Understanding Ecological Patterns and Processes at Continental Scales.” Frontiers in Ecology and the Environment 12: 5–14.
- 2008. “Shedding Light on the Dark Data in the Long Tail of Science.” Library Trends 57: 280–99.
- 2021a. “terra: Spatial Data Analysis.” R package version 1.4-22. https://CRAN.R-project.org/package=terra.
- 2021b. “raster: Geographic Data Analysis and Modeling.” R package version 3.5-2. https://CRAN.R-project.org/package=raster.
- 2020. “Three-Dimensional Digital Mapping of Ecosystems: A New Era in Spatial Ecology.” Proceedings of the Royal Society B: Biological Sciences 287: 20192383.
- 2014. “Mitigating Systematic Error in Topographic Models Derived from UAV and Ground-Based Image Networks.” Earth Surface Processes and Landforms 39: 1413–20.
- 2021. “Neonhs: Work with NEON AOP Hyperspectral Data.” R package version 0.0.9999. https://github.com/earthlab/neonhs.
- 2021. “Of Course we Fly Unmanned—We're Women!” Drones 5: 21.
- 2010. “NEON: The First Continental-Scale Ecological Observatory with Airborne Remote Sensing of Vegetation Canopy Biochemistry and Structure.” Journal of Applied Remote Sensing 4: 043510.
- 2015. NEON Imaging Spectrometer Radiance to Reflectance Algorithm Theoretical Basis Document. Boulder, CO: National Ecological Observatory Network.
- 2008. “A Continental Strategy for the National Ecological Observatory Network.” Frontiers in Ecology and the Environment 6: 282–4.
- 2010. “Teaching with Principles: Toward More Effective Pedagogy in Ecology.” Ecosphere 1: art15.
- 2013. “Hyperspectral Remote Sensing of Foliar Nitrogen Content.” Proceedings of the National Academy of Sciences 110: E185–92.
- 2021. “Cross-Scale Interaction of Host Tree Size and Climatic Water Deficit Governs Bark Beetle-Induced Tree Mortality.” Nature Communications 12: 129.
- 2022. “Neon-Drone-Workflow.” OSF. https://doi.org/10.17605/OSF.IO/ENBWU
- 2018. “Mapping Forest Structure Using UAS inside Flight Capabilities.” Sensors (Basel, Switzerland) 18: 2245.
- 2021. “The Evolution of Macrosystems Biology.” Frontiers in Ecology and the Environment 19: 11–9.
- 1996. “Patterns in Ecology.” Oikos 75: 145–7.
- 2019. “Hyperspectral Data Analysis in R: The hsdar Package.” Journal of Statistical Software 89: 1–23.
- 2014. “Calibration of the National Ecological Observatory Network's Airborne Imaging Spectrometers.” In 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, Canada, 2625–8. IEEE. https://doi.org/10.1109/IGARSS.2014.6947012.
- 1992. “The Problem of Pattern and Scale in Ecology: The Robert H. MacArthur Award Lecture.” Ecology 73: 1943–67.
- 2018. “Maintaining the Culture of Ecology.” Frontiers in Ecology and the Environment 16: 195–5.
- NEON (National Ecological Observatory Network). 2021. “neonUtilities: Utilities for Working with NEON Data.” R package version 2.1.2. https://CRAN.R-project.org/package=neonUtilities.
- 2022. “Ten Simple Rules for Working with High Resolution Remote Sensing Data.” OSF Preprints. Version 3. 1–30.
- 2016. Users and Uses of Landsat 8 Satellite Imagery Survey Results. USGS Numbered Series. Reston, VA: U.S. Geological Survey.
- 2016. “The Value of Earth Observations: Methods and Findings on the Value of Landsat Imagery.” In Communicating Climate-Change and Natural Hazard Risk and Cultivating Resilience: Case Studies for a Multi-Disciplinary Approach, edited by J. L. Drake, Y. Y. Kontar, J. C. Eichelberger, T. S. Rupp, and K. M. Taylor, 223–37. Cham: Springer International Publishing.
- Musinsky, J., T. Goulden, G. Wirth, N. Leisso, K. Krause, M. Haynes, and C. Chapman. 2022. “Spanning Scales: The Airborne Spatial and Temporal Sampling Design of the National Ecological Observatory Network.” Methods in Ecology and Evolution.
- 2020. “Improving Data Access Democratizes and Diversifies Science.” Proceedings of the National Academy of Sciences 117: 23490–8.
- 2021. “Harnessing the NEON Data Revolution to Advance Open Environmental Science with a Diverse and Data-Capable Community.” Ecosphere 12: e03833.
- National Ecological Observatory Network. 2020. “geoNEON: Geolocation Data Access for NEON Data.” R package.
- 2019. “Enhancing UAV Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images.” Remote Sensing 11: 239.
- 2018. “Mapping Three-Dimensional Structures of Forest Canopy Using UAV Stereo Imagery: Evaluating Impacts of Forward Overlaps and Image Resolutions with LiDAR Data as Reference.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11: 3578–89.
- NSF. 2013. “National Ecological Observatory Network (NEON): Revolutionizing Ecological Research.” https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf13088.
- 2017. “Research Debt.” Distill 2: e5.
- 2021. Processing Coastal Imagery with Agisoft Metashape Professional Edition, Version 1.6, Structure from Motion Workflow Documentation. Open-File Report. Reston, VA: U.S. Geological Survey.
- 2017. “UAS, Sensors, and Data Processing in Agroforestry: A Review towards Practical Applications.” International Journal of Remote Sensing 38: 2349–91.
- 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10: 439–46.
- 2020. “Changes to the Monitoring Trends in Burn Severity Program Mapping Production Procedures and Data Products.” Fire Ecology 16: 16.
- 2019. “Terrestrial Structure from Motion Photogrammetry for Deriving Forest Inventory Data.” Remote Sensing 11: 950.
- 2021. “ForestTools: Analyzing Remotely Sensed Forest Data.” R package version 0.2.5. https://CRAN.R-project.org/package=ForestTools.
- 2004. “Seeing the Trees in the Forest: Using LiDAR and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height.” Photogrammetric Engineering and Remote Sensing 70: 589–604.
- R Core Team. 2021. R: A Language and Environment for Statistical Computing. Manual. Vienna: R Foundation for Statistical Computing.
- 2020. “Low-Cost System for Radiometric Calibration of UAV-Based Multispectral Imagery.” Journal of Spatial Science: 1–15. https://doi.org/10.1080/14498596.2020.1860146.
- 1973. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation. Type II Report. Greenbelt, MD: Goddard Space Flight Center.
- Roussel, J. R., D. Auty, N. C. Coops, P. Tompalski, T. R. H. Goodbody, A. Sánchez Meador, J. F. Bourdon, F. De Boissieu, and A. Achim. 2020. “lidR: An R Package for Analysis of Airborne Laser Scanning (ALS) Data.” Remote Sensing of Environment 251: 112061.
- 2021. “lidR: Airborne LiDAR Data Manipulation and Visualization for Forestry Applications.” R package version 3.2.3. https://cran.r-project.org/package=lidR.
- 2012. Data Backup Options. Washington, DC: United States Computer Emergency Response Team. www.us-cert.gov/sites/default/files/publications/data_backup_options.pdf.
- 2021. “Influence of Flight Altitude and Control Points in the Georeferencing of Images Obtained by Unmanned Aerial Vehicle.” European Journal of Remote Sensing 54: 59–71.
- 2013. NEON Observatory Design. Boulder, CO: National Ecological Observatory Network.
- 2020. “Integrating National Ecological Observatory Network (NEON) Airborne Remote Sensing and In-Situ Data for Optimal Tree Species Classification.” Remote Sensing 12: 1414.
- 2021. Spectral Reflectance Measurements of Radiometric Calibration Panels for UAS Image Calibration. Reston, VA: U.S. Geological Survey.
- 1999. “The Use of the Empirical Line Method to Calibrate Remotely Sensed Data to Reflectance.” International Journal of Remote Sensing 20: 2653–62.
- 2021. “Influence of Flight Parameters on UAS-Based Monitoring of Tree Height, Diameter, and Density.” Remote Sensing of Environment 263: 112540.
- 2018. “Considerations for Achieving Cross-Platform Point Cloud Data Fusion across Different Dryland Ecosystem Structural States.” Frontiers in Plant Science 8: 1–13.
- 2021. “Influence of Agisoft Metashape Parameters on UAS Structure from Motion Individual Tree Detection from Canopy Height Models.” Forests 12: 250.
- 2018. “Assessing UAV-Collected Image Overlap Influence on Computation Time and Digital Surface Model Accuracy in Olive Orchards.” Precision Agriculture 19: 115–33.
- 1989. “Landscape Ecology: The Effect of Pattern on Process.” Annual Review of Ecology and Systematics 20: 171–97.
- 2015. “Barriers to Adding UAVs to the Ecologist's Toolbox.” Frontiers in Ecology and the Environment 13: 74–5.
- 2015. “A Simplified Empirical Line Method of Radiometric Calibration for Small Unmanned Aircraft Systems-Based Remote Sensing.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8: 1876–85.
- 2020. “Foliar Functional Traits from Imaging Spectroscopy across Biomes in Eastern North America.” New Phytologist 228: 494–511.
- 2021. “A Benchmark Dataset for Canopy Crown Detection and Delineation in Co-Registered Airborne RGB, LiDAR and Hyperspectral Imagery from the National Ecological Observation Network.” PLoS Computational Biology 17: e1009180.
- 2019. “Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks.” Remote Sensing 11: 1309.
- 2020. “Cross-Site Learning in Deep Learning RGB Tree Crown Detection.” Ecological Informatics 56: 101061.
- 2012. “‘Structure-From-Motion’ Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications.” Geomorphology 179: 300–14.
- 2019. “Emergent Challenges for Science sUAS Data Management: Fairness through Community Engagement and Best Practices Development.” Remote Sensing 11: 1797.
- 2022. “Optimizing Aerial Imagery Collection and Processing Parameters for Drone-based Individual Tree Mapping in Structurally Complex Conifer Forests.” Methods in Ecology and Evolution 13: 1447–63.
- 2016. “An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation.” Remote Sensing 8: 501.
- 2020. “Analysis of UAS Flight Altitude and Ground Control Point Parameters on DEM Accuracy along a Complex, Developed Coastline.” Remote Sensing 12: 2305.
- 2021. “Addressing Data Integration Challenges to Link Ecological Processes across Scales.” Frontiers in Ecology and the Environment 19: 30–8.