Benchmark bird surveys help quantify counting accuracy in a citizen-science database

Research output: Contribution to journal › Article › peer-review

Standard

Benchmark bird surveys help quantify counting accuracy in a citizen-science database. / Robinson, W Douglas; Hallman, Tyler A; Hutchinson, Rebecca A.
In: Frontiers in Ecology and Evolution, Vol. 9, 568278, 10.02.2021.

Research output: Contribution to journal › Article › peer-review

Harvard

Robinson, WD, Hallman, TA & Hutchinson, RA 2021, 'Benchmark bird surveys help quantify counting accuracy in a citizen-science database', Frontiers in Ecology and Evolution, vol. 9, 568278. https://doi.org/10.3389/fevo.2021.568278

APA

Robinson, W. D., Hallman, T. A., & Hutchinson, R. A. (2021). Benchmark bird surveys help quantify counting accuracy in a citizen-science database. Frontiers in Ecology and Evolution, 9, Article 568278. https://doi.org/10.3389/fevo.2021.568278

Vancouver

Robinson WD, Hallman TA, Hutchinson RA. Benchmark bird surveys help quantify counting accuracy in a citizen-science database. Frontiers in Ecology and Evolution. 2021 Feb 10;9:568278. doi: 10.3389/fevo.2021.568278

Author

Robinson, W Douglas ; Hallman, Tyler A ; Hutchinson, Rebecca A. / Benchmark bird surveys help quantify counting accuracy in a citizen-science database. In: Frontiers in Ecology and Evolution. 2021 ; Vol. 9.

RIS

TY - JOUR

T1 - Benchmark bird surveys help quantify counting accuracy in a citizen-science database

AU - Robinson, W Douglas

AU - Hallman, Tyler A

AU - Hutchinson, Rebecca A

PY - 2021/2/10

Y1 - 2021/2/10

N2 - The growth of biodiversity data sets generated by citizen scientists continues to accelerate. The availability of such data has greatly expanded the scale of questions researchers can address. Yet, error, bias, and noise continue to be serious concerns for analysts, particularly when data being contributed to these giant online data sets are difficult to verify. Counts of birds contributed to eBird, the world’s largest biodiversity online database, present a potentially useful resource for tracking trends over time and space in species’ abundances. We quantified counting accuracy in a sample of 1,406 eBird checklists by comparing numbers contributed by birders (N = 246) who visited a popular birding location in Oregon, USA, with numbers generated by a professional ornithologist engaged in a long-term study creating benchmark (reference) measurements of daily bird counts. We focused on waterbirds, which are easily visible at this site. We evaluated potential predictors of count differences, including characteristics of contributed checklists, of each species, and of time of day and year. Count differences were biased toward undercounts, with more than 75% of counts being below the daily benchmark value. Median count discrepancies were −29.1% (range: 0 to −42.8%; N = 20 species). Model sets revealed an important influence of each species’ reference count, which varied seasonally as waterbird numbers fluctuated, and of percent of species known to be present each day that were included on each checklist. That is, checklists indicating a more thorough survey of the species richness at the site also had, on average, smaller count differences. However, even on checklists with the most thorough species lists, counts were biased low and exceptionally variable in their accuracy. To improve utility of such bird count data, we suggest three strategies to pursue in the future. (1) Assess additional options for analytically determining how to select checklists that include less biased count data, as well as exploring options for correcting bias during the analysis stage. (2) Add options for users to provide additional information that helps analysts choose checklists, such as an option for users to tag checklists where they focused on obtaining accurate counts. (3) Explore opportunities to effectively calibrate citizen-science bird count data by establishing a formalized network of marquis sites where dedicated observers regularly contribute carefully collected benchmark data.

AB - The growth of biodiversity data sets generated by citizen scientists continues to accelerate. The availability of such data has greatly expanded the scale of questions researchers can address. Yet, error, bias, and noise continue to be serious concerns for analysts, particularly when data being contributed to these giant online data sets are difficult to verify. Counts of birds contributed to eBird, the world’s largest biodiversity online database, present a potentially useful resource for tracking trends over time and space in species’ abundances. We quantified counting accuracy in a sample of 1,406 eBird checklists by comparing numbers contributed by birders (N = 246) who visited a popular birding location in Oregon, USA, with numbers generated by a professional ornithologist engaged in a long-term study creating benchmark (reference) measurements of daily bird counts. We focused on waterbirds, which are easily visible at this site. We evaluated potential predictors of count differences, including characteristics of contributed checklists, of each species, and of time of day and year. Count differences were biased toward undercounts, with more than 75% of counts being below the daily benchmark value. Median count discrepancies were −29.1% (range: 0 to −42.8%; N = 20 species). Model sets revealed an important influence of each species’ reference count, which varied seasonally as waterbird numbers fluctuated, and of percent of species known to be present each day that were included on each checklist. That is, checklists indicating a more thorough survey of the species richness at the site also had, on average, smaller count differences. However, even on checklists with the most thorough species lists, counts were biased low and exceptionally variable in their accuracy. To improve utility of such bird count data, we suggest three strategies to pursue in the future. (1) Assess additional options for analytically determining how to select checklists that include less biased count data, as well as exploring options for correcting bias during the analysis stage. (2) Add options for users to provide additional information that helps analysts choose checklists, such as an option for users to tag checklists where they focused on obtaining accurate counts. (3) Explore opportunities to effectively calibrate citizen-science bird count data by establishing a formalized network of marquis sites where dedicated observers regularly contribute carefully collected benchmark data.

U2 - 10.3389/fevo.2021.568278

DO - 10.3389/fevo.2021.568278

M3 - Article

VL - 9

JO - Frontiers in Ecology and Evolution

JF - Frontiers in Ecology and Evolution

SN - 2296-701X

M1 - 568278

ER -
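
The two metrics summarized in the abstract, per-checklist count discrepancy relative to the daily benchmark and checklist "completeness" as the percent of species known present that a checklist reports, can be illustrated with a minimal sketch. The data, column names, and pandas-based implementation below are illustrative assumptions only, not the authors' eBird data or analysis code.

# Illustrative sketch: hypothetical checklists compared against a daily
# benchmark count, following the metrics described in the abstract.
import pandas as pd

# Hypothetical observer checklists: one row per species count per checklist.
checklists = pd.DataFrame({
    "checklist_id": ["c1", "c1", "c2", "c2", "c2"],
    "species":      ["Mallard", "Gadwall", "Mallard", "Gadwall", "Bufflehead"],
    "count":        [40, 12, 55, 20, 6],
})

# Hypothetical benchmark (reference) counts for the same day.
benchmark = pd.DataFrame({
    "species": ["Mallard", "Gadwall", "Bufflehead", "American Wigeon"],
    "benchmark_count": [70, 25, 10, 15],
})

merged = checklists.merge(benchmark, on="species", how="left")

# Percent discrepancy of each reported count relative to the daily benchmark;
# negative values are undercounts, as in the abstract's summary statistics.
merged["pct_discrepancy"] = (
    100 * (merged["count"] - merged["benchmark_count"]) / merged["benchmark_count"]
)

# Median discrepancy per species across checklists.
median_by_species = merged.groupby("species")["pct_discrepancy"].median()

# Checklist completeness: percent of species known present that day
# (the benchmark list) that each checklist reported at all.
n_present = benchmark["species"].nunique()
completeness = (
    checklists.groupby("checklist_id")["species"].nunique() / n_present * 100
)

print(median_by_species)
print(completeness)

In this toy example, checklist c2 reports 3 of the 4 benchmark species (75% completeness) while c1 reports 2 of 4 (50%); under the relationship reported in the abstract, the more complete checklist would be expected, on average, to show smaller count discrepancies.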