Survey Methods and On-Line and Off-Line Thinking
I’m on the final chapter of what has been a very difficult read: “Language and Human Behavior” by Derek Bickerton.
He tackles some very difficult concepts in a clear-cut way, with frequent deep dives into certain pockets of goodness. It’s a hard read because it’s very dense, and perhaps because I’m not terribly familiar with the subject matter.
The material in there about consciousness and the notions of On-Line thinking and Off-Line thinking is driving this post. I haven’t figured out a way of expressing the difference in a paragraph or less without Bickerton finding out and reaming me out for getting it not quite right.
Into the meat of the post:
I frequently draw the line between observed behavior and reported behavior. One of the reasons for my caution with online satisfaction surveys is that the data is reported by the user and frequently involves some form of prospection.
To cite an obscure example: the Canadian Election Study, if taken at face value, would predict voter turnout several percentage points higher than it actually is at the ballot box.
That is, based on the pre-election question “Will you vote?” and the post-election question “Did you vote?”, the survey predicts a much higher rate of turnout than what really happened.
So, is the opt-in sample skewed? (A person who is willing to fill out a massive survey about politics is naturally more inclined to vote anyway.) Or are people just very bad at prospection? (I told what I believed was the truth: I will vote. But the odds of my actually going to vote on election day were low.)
Or did the survey itself raise some form of awareness in the respondent, making it more likely that they would actually vote, so that the self-reported voting rate agrees with what actually happened at the ballot box? (I.e., they are telling the truth about their turnout.)
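To make the first hypothesis concrete, here’s a toy sketch of how opt-in skew alone can inflate a survey’s turnout figure, even with every respondent telling the truth. Every number in it is invented for illustration:

```python
# A toy model of the opt-in skew hypothesis. Every number here is
# hypothetical, chosen only to show the mechanism.
pop_turnout = 0.60            # share of the population who will actually vote
respond_rate_voters = 0.30    # voters' willingness to take a long political survey
respond_rate_nonvoters = 0.10 # non-voters' willingness

sample_voters = pop_turnout * respond_rate_voters
sample_nonvoters = (1 - pop_turnout) * respond_rate_nonvoters
sample_turnout = sample_voters / (sample_voters + sample_nonvoters)

print(f"population turnout: {pop_turnout:.0%}")                # 60%
print(f"turnout in the opt-in sample: {sample_turnout:.0%}")   # ~82%
```

With perfectly honest respondents, the sample still over-reports turnout by twenty-odd points, simply because the people willing to answer skew toward voters.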
I’ve frequently argued, quite unsuccessfully I might add, that a survey is itself a form of user experience that impacts perception. An on-site survey is one of the few ways that people can actually communicate with a company. After years of combing through comments and applying long-tail analysis, it becomes readily clear that a comments box is some sort of cross between a help desk and an invitation to engage in 4chan-style anonymous behavior.
Customers frequently see companies as monolithic. Why wouldn’t they? And why shouldn’t they expect a survey to be a form of vital communication rather than a research tool for making things better? Customers don’t care. And I happen to agree with them.
It’s for this reason that ‘voice of the customer’ online survey software should be treated as a proxy for the truth and not as gospel. It has its uses, to be sure, but it should be handled with care. The feedback contained within the survey is valid, and if the survey is kept consistent over time, it can be used as a KPI. It has internal validity, but I’d become really uncomfortable taking a sample of 1,000, asking them “Will you buy this product?”, and applying that rate against all visitation to the website. At least you’re not guessing. (And we don’t guess.) But it is very dirty.
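To put a number on “very dirty”: a minimal sketch of that extrapolation, with hypothetical figures, showing how wide the range is from sampling error alone, before you even touch opt-in bias or the intent-behavior gap:

```python
# A minimal sketch of the "sample of 1,000" extrapolation. All numbers
# here are hypothetical.
import math

sample_size = 1000            # survey respondents
said_will_buy = 180           # answered yes to "Will you buy this product?"
monthly_visitors = 2_500_000  # the visitation we'd be tempted to project onto

p = said_will_buy / sample_size            # stated-intent rate: 18%
se = math.sqrt(p * (1 - p) / sample_size)  # standard error of that rate
low, high = p - 1.96 * se, p + 1.96 * se   # 95% interval, normal approximation

print(f"stated intent: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
print(f"projected buyers: {monthly_visitors*low:,.0f} to {monthly_visitors*high:,.0f}")
```

The projection swings by over a hundred thousand “buyers” from sampling error alone, and that interval still assumes stated intent equals behavior, which, per the above, it rarely does.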
It’s something.
I wouldn’t bet the farm on a survey, though.
The best feedback is observed. If you want to know what people really think and how they really feel, watch them.
So, to tie this back to Bickerton:
I prefer recording observed behavior because the user remains in a state of On-Line thinking. To borrow loosely from physics: I’m not changing an electron’s momentum by measuring its position.
Surveys have their place, to be sure, but they’re inferior to other methodologies.
4 thoughts on “Survey Methods and On-Line and Off-Line Thinking”
I’m with you in the main: surveys versus observed behavior in isolation, behavior every time.
However, I think surveys can be very useful and even precise (not accurate) when tied to actual behavior first. This is the mistake many people make online: you want to survey segments of people with *known* behavior, or track behavior after the survey and then tie it back.
The most classic case of this I witnessed was two customer segments: one said they were likely to buy, the other said they were not likely to buy. The actual behavior going forward after the survey was the exact opposite; those who said “not likely” bought much more often than those who said they were likely (it’s a LifeCycle thing, catch me sometime and I’ll elaborate…)
The point: even though these customers were terrible predictors of their own behavior, they were *consistently* wrong, which was an important discovery in terms of the psychology and of successfully marketing to them.
Said another way, you could *predict* the behavior based on the survey, even though the behavior was the opposite of stated.
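A minimal sketch of that tie-back, with entirely made-up respondents and purchases, just to show the mechanics of joining stated intent to observed behavior:

```python
# A minimal sketch of the tie-back: join each respondent's stated intent
# to their observed purchases afterward. All data here is hypothetical.
from collections import defaultdict

# respondent_id -> answer to the intent question
survey = {101: "likely", 102: "likely", 105: "likely",
          103: "not likely", 104: "not likely", 106: "not likely"}

# respondent_ids observed buying in the window after the survey
purchased = {102, 103, 104, 106}

counts = defaultdict(lambda: [0, 0])   # intent -> [buyers, respondents]
for rid, intent in survey.items():
    counts[intent][1] += 1
    counts[intent][0] += rid in purchased

for intent, (buyers, total) in counts.items():
    print(f"said '{intent}': {buyers}/{total} bought ({buyers/total:.0%})")

# If the inversion holds period after period, stated intent is still a
# usable predictor -- you just flip the sign.
```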
So faced with a choice, I always go with observed behavior over surveys, because the surveys are frequently not representative of what actually happens.
Given the luxury of doing surveys correctly – tied to known behavior before or after – I love having survey data.
@jimnovo
I’m inclined to agree with you.
You make an important point: even though people are terrible predictors of their own behavior, if they’re consistently wrong about it, then yes, it’s exploitable.
The problem is that we’ve been taught to treat survey feedback as gospel, and that is a very difficult perception to break. Not to say that it should be broken, necessarily. It would be a very uphill climb.
The place where I work lives and breathes surveys. They are essentially the only arrow in our quiver.
For old-school researchers, they are the only window onto the customer.
Do either of you have suggestions for literature that might persuade them that other methods can be better?
Also, can you define what counts as observed-behaviour studies: ethnographic or in-situ studies? IDIs (in-depth interviews)?
I find that most focus groups have weaknesses similar to surveys, without the statistical significance and ability to replicate results.
@Mark
If you can unify survey data with your web analytics, there’s power there, because then you can see the divergence between reported attitudes and prospection on the one hand and observed results on the other.
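A minimal sketch of what that unification looks like, assuming you can key both sources on the same visitor id (every id, field, and value below is hypothetical):

```python
# A minimal sketch of unifying survey answers with analytics records via a
# shared visitor id. All ids, fields, and values here are hypothetical.
survey = {"a1": True, "b2": False, "c3": True}     # visitor_id -> "will you buy?"
analytics = {"a1": False, "b2": True, "c3": True}  # visitor_id -> actually bought?

# join on visitor id, then count where prospection diverges from behavior
joined = {vid: (said, analytics[vid])
          for vid, said in survey.items() if vid in analytics}
divergent = sum(said != did for said, did in joined.values())
print(f"{divergent} of {len(joined)} respondents diverged from stated intent")
```

That shared visitor id is exactly where the trouble starts, which brings me to: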
This leads into all sorts of PII and ethical problems. Not sure that I want to go there.
NextStage has a good position on this.