60,000 emails yield (Guess… how many?) respondents
Two recent online studies needed to be completed quickly. Because each required a larger number of participants than usual and time was extremely limited, participants were recruited via email blasts. The two projects involved similar participants but differed sharply in methodology: one was a 15-minute online survey, the other a 90-minute online journal/bulletin board. Each used its own database of 12,000 potential participants, and each offered what was considered a “reasonable” cash incentive.
The goal for each study was roughly 50 participants. Do you think that multiple email blasts to 12,000 potential participants for each study fulfilled either goal?
Whether you voted yes or no, you are right. The short survey met the goal, but the more involved online bulletin board study resulted in only 30 completions. The differences between the two studies were instructive. Here is some data to consider.
| | 15-minute survey | 90-minute journal/bulletin board |
|---|---|---|
| Number of blasts | 3 blasts to 12,000 addresses | 2 blasts to 12,000 addresses |
| Length of research process | 15 minutes | 90 minutes |
| Initial incentive | $20 | $75 |
| Increased incentive | $35 | $100 |
Note that both studies required multiple email blasts. Each blast generated activity for about 24 hours, after which the number of participants clicking the invitation slowed dramatically.
Also note that the incentive for both studies was increased when it became clear that the original amount offered was not compelling enough to generate the desired result.
What did I learn? Well, this is what I will do differently next time:
- Incentivise generously rather than reasonably. Each email blast costs over $1,000, while increasing the incentive from a reasonable to a generous rate costs about half that amount. If the higher incentive fills the study with fewer email blasts, then I have saved money and gained happier, more motivated participants.
- Blast more participants the first time. I chose not to use the entire available universe of participants because I thought that 12,000 would be more than sufficient. I assumed that at least one percent would click the invitation and that about half of them would qualify. In reality, a much higher number “clicked in,” but a majority of them never actually began answering the qualifying questions or abandoned the process after answering the first few. Based on these results, a more realistic assumption might be that 0.05% will be motivated enough to complete the qualifying questions. Of course, the incidence of those who answer all the questions and actually qualify will depend on the nature of the questions.
- Conduct a test blast. A message sent to a small number of participants would help to gauge the likely incidence when the same offer is blasted to the full sample. This type of test, which would take roughly 24 hours, would enable me to more accurately gauge the right number of potentials to include and the amount of incentive to offer to fulfill my goal with a single blast.
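The sizing logic behind these lessons can be sketched as a simple funnel calculation. The rates below are illustrative assumptions, not figures from either study; plug in whatever a test blast suggests.

```python
import math

def required_blast_size(goal, click_rate, qualify_rate, complete_rate):
    """Estimate how many addresses to blast so the expected number of
    completed interviews meets the goal, given assumed funnel rates."""
    yield_per_address = click_rate * qualify_rate * complete_rate
    return math.ceil(goal / yield_per_address)

# Example: aim for 50 completes, assuming 1% click the invitation,
# 50% of clickers qualify, and 25% of qualifiers finish the study
# (hypothetical numbers for illustration only).
print(required_blast_size(50, 0.01, 0.50, 0.25))  # -> 40000
```

Running the same calculation with rates measured from a small test blast shows quickly whether the available database is large enough for a single full blast, or whether the incentive needs to rise first.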