2015 was a pilot year for Crab Team monitoring. Here, in a series of two posts, Natalie White, an undergraduate in the UW Program on the Environment Capstone Program, will share the work she did to understand the volunteer experience during our pilot year. In a third post, we’ll fill you in on how we’ve responded to this information, and what we learned by listening.
With the official 2016 program launch upon us, now is a great time to reflect on the pilot season of Washington Sea Grant's Crab Team monitoring program! As a new citizen science program that has the potential to help protect the Puget Sound region from the claws of the European green crab, it's important to take a step back and evaluate the program and the experience volunteers have while participating. Last summer, as a Capstone student at the University of Washington, I had the privilege of working with WSG on the development of this program. Determined to narrow the focus of my project, I asked myself, "What can I do to contribute to launching a successful program?" To answer this, I considered some of the problems and limitations in citizen science.
Two major obstacles limit the perception of citizen science as an effective way to answer big questions:
- Some people don’t trust non-scientists to collect data of high quality; and
- Programs can be costly to run if volunteer retention is low and staff are constantly training new volunteers.
Both of these obstacles relate to how volunteers experience training: if, after being trained, volunteers don't feel prepared to collect data, they probably need more support! And if volunteers feel they aren't doing a good job, they are less likely to stay with the program. With this in mind, I decided to focus on the experience of volunteers as they train and prepare to sample independently, with the goal of improving the training volunteers receive and simultaneously tackling some of those obstacles!
Now, what exactly did 2015 Crab Team training involve? To kick things off, volunteers attended a 2-3 hour training workshop. During this time, they were introduced to the program, learned about the green crab invasion threat and how they can help, practiced the protocols in a mock estuary habitat, and learned how to identify green crabs and other species they might see. Definitely a packed session! Following this initial training, a WSG staff member accompanied volunteers to their assigned site, where together they completed their first round of sampling. These site visits allowed volunteers to get to know their site and work out any site-specific questions. Volunteers also received a species identification guide and a detailed protocol booklet to help them out in the field. After these sessions, volunteers took the reins!
So, how did I go about evaluating volunteers’ experience during these training sessions? I first formulated pre- and post-assessments for volunteers to take at the initial training workshops. These questions tested volunteers on the knowledge gained from the training itself and also gave them an opportunity to tell us what worked well and where improvements could be made. Because this training session is the introduction to the program, knowing where volunteers are having trouble, and what aspects they enjoy, is an essential part of retention.
Next, once volunteers had cast off their training wheels (that is, completed the site visit with Crab Team staff), I played the role of silent observer, accompanying several groups to their sites. As I watched the volunteers in action, I paid special attention to which steps seemed to be smooth sailing, and where any hiccups emerged. After they finished sampling for the day, I held a brief interview with the group, asking volunteers to reflect on the strengths and weaknesses of the protocol, and of the program overall.
Finally, as an opportunity for further candid reflection, I also released an online survey for volunteers to fill out once they had a month's worth of independent sampling under their belts. Responses to this survey were anonymous, permitting volunteers to express themselves as honestly as possible. We wanted to make sure volunteers had plenty of chances to share their thoughts on the program.
Phew, that’s a lot of steps! Given that this is a new program, it is especially important to consider the experience of volunteers as they begin to participate. Why do they join? What do they get out of participating? How can we make participating easier, or more fun? This information not only keeps volunteers coming back, but also keeps them engaged in doing a good job, and collecting high-quality data that help effect real, positive change in the places they care about. So, while certainly time consuming, these steps are important to launching a successful citizen science program!
Stay tuned for the next post, where I’ll share with you what I found from all of this research!