Thursday, February 2, 2012

The Survey Research Prisoner's Dilemma: Who's Going to Put the Brakes on Bad Survey Design?

Not too long ago, a Survey Monkey link came up in my Facebook feed.  As you may have deduced, I run a company that (among other things) specializes in survey research, so I clicked the link out of curiosity. Here’s what I saw on the splash page:
By all practical indications, this survey is not intended for me.  It’s targeting community organizers and neighborhood council members, people who volunteer their time for the city’s neighborhoods, and who probably know a thing or two about what gets prioritized, and what doesn’t.

I’m not one of those people.
I read the intro and instructions out of curiosity, expecting to be screened out. But I wasn’t screened out.  Instead, I began receiving questions I’m uniquely unqualified to answer: questions meant to help prioritize the budget of Los Angeles, the second-largest city in the United States.

This survey underscores a giant flaw in the direction in which survey research is heading: everyone is a survey writer these days.  A few choice shortcomings of this particular survey:
·         There’s no screener.  Anyone can take the survey.  Including me.  Or, more amusingly, someone who does not actually live in the city of Los Angeles.  The survey has no way of verifying that I live in LA – right now, at this very moment, I’m in Ohio, and my IP address will say as much.

·         It’s not gated.  I took the whole survey. And then I was able to take it again.  Remember now, the topic is the city budget.  I can think of a few stakeholders who’d want to weigh in on that more than once.  Ballot-box stuffing, anyone?  (A sketch of what basic screening and gating logic looks like follows this list.)

·         The survey is unprotected.  Most people are probably too lazy to take this whole thing, but that doesn’t matter – they can still aimlessly click through and screen cap it.  For a political office, this survey represents a potential PR nightmare.  (To which I am actively contributing, guilty as charged.)  My fair research colleagues -- this is how your weakly worded questions make the news.  I’ve had journalists screen capture and post my surveys in their columns before (such is the trade hazard of conducting web intercept surveys on news websites).  And that was stressful even when the questions were well-written, and the journalist was reasonably kind.
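For non-researchers wondering what “screened” and “gated” look like in practice, here is a minimal, purely illustrative Python sketch of the checks a professionally programmed survey runs before anyone sees question one. The field names, qualification rules, and respondent-ID store are my own assumptions for illustration – they are not from the mayor’s office survey or from any particular platform.

```python
# Purely illustrative: the screener and gating checks a programmed survey
# would run before showing the questionnaire. Field names, eligible roles,
# and the completed-ID store are assumptions for this sketch only.

completed_ids = set()  # in practice, a persistent store keyed on respondent ID

def passes_screener(respondent):
    """Only the intended audience gets in: LA residents in a relevant role."""
    if not respondent.get("lives_in_city_of_la"):
        return False
    return respondent.get("role") in {"neighborhood_council_member", "community_organizer"}

def passes_gate(respondent_id):
    """One completed survey per respondent ID; repeat attempts are blocked."""
    if respondent_id in completed_ids:
        return False
    completed_ids.add(respondent_id)
    return True

def admit(respondent):
    """A respondent reaches the questionnaire only if both checks pass."""
    return passes_screener(respondent) and passes_gate(respondent["id"])

# Example: I should have been turned away on the first check alone.
print(admit({"id": "kerry", "lives_in_city_of_la": True, "role": "researcher"}))  # False
```

None of this is exotic – most professional survey platforms can enforce screeners, quotas, and one-response-per-invitation links out of the box. Someone just has to decide to turn them on.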

It’s tempting to use these self-serve tools to save money; I myself have gone that route when a “real” budget was not in the cards. But when major financial decisions are made based on research conducted by non-researchers using highly flawed methodologies, we need to start taking action to stop it.  Weak research from a select few gives our entire industry a bad reputation.  It causes million-dollar mistakes.  And that’s a much bigger deficit than what it would have cost to conduct this survey the right way, with the right professional discipline.
The mayor’s office survey is hardly an isolated incident.  The Grey Matter online research report I mentioned in my last blog post illustrates questionnaire design issues with a few amusing anecdotes from its research.  Among my favorites were the hilariously broad question, “What characters, movies, or brands do you like?”, the grammatically cringeworthy, “Of LCD TV, that you currently own, what type does you mainly watch?”, and a straight man being asked how he dresses his husband.

A lot of talk goes into other issues in questionnaire design – long surveys, question styles that work on mobile devices, and so on.  These are all important issues to sort out, but what about the bigger issue at stake here: amateurs taking on a business function in ways that can lead to major business mistakes?
I’m pained by what I see as the Survey Monkeyization of survey research. (And I say this as someone with friends – people I genuinely like and respect – who work there.)  Like any professional discipline, good questionnaire design takes training and practice.  And online research has made it easy for weak designs to infiltrate the industry, annoy respondents, decrease response rates, and lead to the wrong insights. It makes everyone skeptical of our trade. It devalues what we do.

It’s time we all take action to stop the madness.  Non-researchers – if you’ve made it this far, thank you for taking the time to understand that asking questions of a convenience sample of your friends or alumni list does not make you a professional researcher any more than blogging makes me a professional publicist.  Spread that word, and seek out the expertise of people who know the pros and cons of a rating vs. a ranking and how to handle challenges like gender response bias.  Fellow researchers – speak up when you see these egregiously bad surveys; lend your insights to those who lack your experience.  And if you work for a panel, screen the surveys before you set them live – don’t just move forward with something that your gut tells you is wrong.  In the economic game of market research, that lands us all in that ugly quadrant where nobody wins.

-Kerry

10 Steps to Managing Online Panels

Grey Matter Research recently released a report entitled “More Dirty Little Secrets of Online Panels,” in which some clever industrial espionage revealed a range of inconsistencies and questionable business practices on the part of online panels.  (Email me if you want a copy: kerry@researchnarrative.com.)  Among the key findings:

·       Panels regularly subcontract sample to other panels, and don’t tell clients.
·       The frequency of survey invitations seems to be increasing – panels now typically invite their panelists to take surveys at least weekly, and sometimes multiple times per day.
·       Respondents can become trained to understand questionnaire scales, and their reactions can change as a result of having taken surveys in the past.
·       Panels willingly field terribly designed surveys.
·       It’s easy to join upwards of a dozen panels.

I was fortunate to be in on the ground floor of online research back in the ’90s, and as a result, I’m a bit jaded – nothing in the report was really news to me.  Many of these talking points have been around for at least a decade, but there are far more online research players today – many with dubious business practices. It takes a lot more work to monitor them now, and the responsibility lies with all of us. I educate my vendors (online sample providers or otherwise) all the time, and I expect them to do the same for me.

It’s easy to think of sample as a commodity, but it’s not.  Panels have reputations, strengths and weaknesses, and built-in biases that can work to your benefit or detriment, depending on the study. Here are some of the strategies I’ve used for minimizing the negative issues outlined in the report.

1.      Dig beyond the bid.  Ask questions of panel partners. Find out how they recruit, how they frequency cap, what their invitations look like, etc.

2.      Join their panels. Be a spy.  Make sure they practice what they preach.  Call them out on it when they don’t. 

3.      Tell them upfront that they can’t outsource/subcontract, or must get approval from you first. Don’t assume that quality panels don’t do this. Most of them do – out of necessity.

4.     Bring in multiple sample partners on a study yourself.  Tell them you’re doing it, and who you’re using.  Then compare them.  It puts them on their best behavior and gives you insight into how each panel operates.

5.     Pretest/soft launch every study.  Check the data.  Look for inconsistencies.  Look for places with heavy dropout.  Adjust course accordingly before full fielding.  We’re in the business of market research, not emergency medicine; the extra day this takes will be immaterial 99% of the time.  (A sketch of what these checks can look like follows this list.)

6.     Be wary if they don’t ask you questions.  Good panels will have account managers (AMs) who take your surveys, give you feedback, provide insight into incentive necessity, and generally demand that you don’t abuse their panelists.  Bad panels will say yes to whatever ridiculous request you have, usually because they’re desperate for business and/or have inexperienced employees who don’t know better.  It’s a good litmus test for whether you should turn and run.

7.     Don’t just go with the cheapest sample bidder.  There’s usually a reason they’re cheaper, and it’s probably not one you’ll want to deal with on the back end.

8.     Every now and then, put a question on the survey asking what panels people belong to.  This gives you an idea of where there’s overlap, and how much the heavy panel subscribers differ from people on only 1 or 2 panels. (I’ve seen this matter a lot, and I’ve seen it matter not at all.)

9.     Make sure you and they can set and track quotas.  Monitor them.  This avoids the dubious statistical manipulations that try to project 10 people to 100.

10.  Add past-participation screeners on surveys where you fear a recent-participation effect.  This isn’t always relevant, but as pointed out in this report, it matters for normative testing.  Well-designed panels will also be able to target their invitations so they reach only people who have not recently participated in your survey or a similar one.  That doesn’t mean respondents haven’t taken it through another panel, but it’s an extra line of defense.
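To make steps 5, 8, and 9 a little more concrete, here is a minimal sketch (in Python with pandas) of the kind of soft-launch data checks I’m describing. The file name, column names, and quota targets are hypothetical – the point is simply that these checks take minutes, not days.

```python
# Illustrative soft-launch checks (steps 5, 8, and 9). The column names,
# input file, and quota targets below are hypothetical.
import pandas as pd

def dropout_by_question(df, question_cols):
    """Share of starters who left each question blank – a crude flag for
    where respondents are abandoning the survey (step 5)."""
    return {q: round(df[q].isna().mean(), 3) for q in question_cols}

def panel_overlap(df, col="panels_joined"):
    """How many panels each respondent claims to belong to (step 8),
    assuming the check-all answers were parsed into Python lists."""
    return df[col].apply(len).value_counts().sort_index()

def quota_status(df, targets, col="region"):
    """Completes so far versus quota targets for each group (step 9)."""
    counts = df[col].value_counts()
    return {group: (int(counts.get(group, 0)), target) for group, target in targets.items()}

# Hypothetical usage on a soft-launch export:
# soft = pd.read_csv("soft_launch.csv")
# print(dropout_by_question(soft, ["q5", "q10", "q15", "q20"]))
# print(panel_overlap(soft))
# print(quota_status(soft, {"Valley": 150, "Westside": 150, "Downtown": 100}))
```

Nothing here is sophisticated; the discipline is in actually looking at the soft-launch data before committing to the full field.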

The world of the RDD (random-digit-dial) gold standard is long gone.  But online panels can provide brilliant, thoughtful, and valuable insights if you manage the research correctly.