I’m on the final chapter of what has been a very difficult read: “Language and Human Behavior” by Bickerton.
He tackles some very difficult concepts in a clear-cut way, with frequent deep dives into certain pockets of goodness. It’s a hard read because it’s very dense, and perhaps because I’m not terribly familiar with the subject matter.
The material in there about consciousness and the notions of On-Line thinking and Off-Line thinking is driving this post. I haven’t figured out a way of expressing the difference in one paragraph or less without Bickerton finding out and reaming me out for getting it not quite right.
Into the meat of the post:
I frequently draw the line between observed behavior and reported behavior. One of the reasons for my caution with online satisfaction surveys is that the data is reported by the user and frequently involves some form of prospection.
To use an obscure reference: the Canadian Election Study, if taken at face value, would predict voter turnout several percentage points higher than the actual turnout at the ballot box.
That is to say, based on the pre-election question “Will you vote?” and the post-election question “Did you vote?”, the survey predicts a much higher rate of turnout than what really happened.
So, is the opt-in sample skewed (a person who is likely to fill out a massive survey about politics is naturally more inclined to vote anyway), or are people just very bad at prospection? (I told what I believed was the truth: I will vote. But the odds of me actually going to vote on election day were low.)
Or – did the survey itself raise some form of awareness in the person and make it more likely that they would actually vote, so that the self-reported voting rate agrees with what actually happened at the ballot box? (i.e., they’re telling the truth about their turnout.)
I’ve frequently argued, quite unsuccessfully I might add, that a survey is itself a form of user experience that impacts perception. An on-site survey is one of the few ways that people can actually communicate with a company. After years of combing through comments and applying longtail analysis, it becomes readily clear that a comments box is some sort of cross between a help-desk ticket and an invitation to engage in 4chan-style anonymous behavior.
Customers frequently see companies as monolithic. Why wouldn’t they? And why shouldn’t they expect a survey to be some form of vital communication instead of a research tool to make things better? Customers don’t care. And I happen to agree with them.
It’s for this reason that ‘voice of the customer’ online survey software is to be treated as a proxy for the truth and not as gospel. It has uses, to be sure, but it should be handled with care. The feedback contained within the survey is valid, and if the survey is held constant over time, it can be used as a KPI. It has ‘internal validity’, but I’d be really uncomfortable taking a sample of 1,000, asking them “Will you buy this product?”, and applying that rate against all visitation to the website. At least you’re not guessing. (And we don’t guess.) But it is very dirty.
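To make the “very dirty” part concrete, here’s a minimal sketch, using entirely hypothetical numbers, of what that extrapolation does: take a stated-intent rate from a sample of 1,000 and project it onto all site visits, with a Wilson score interval around the estimate. Note that the interval only captures sampling error; it says nothing about the reported-versus-observed gap that is the real problem here.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a sample proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Hypothetical: 180 of 1,000 respondents say "yes, I'll buy",
# and the site sees 2,000,000 visits.
lo, hi = wilson_interval(180, 1000)
visits = 2_000_000
print(f"projected buyers: {visits * lo:,.0f} to {visits * hi:,.0f}")
```

Even a clean-looking range like this is only as honest as the answers behind it; stated intent is not observed behavior.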
I wouldn’t bet the farm on a survey though.
The best feedback is observed. If you want to know what people really think and how they really feel – one should focus on watching them.
So – to tie this back to Bickerton:
I prefer recording observed behavior because the user remains in a state of On-Line thinking. To borrow from physics: I’m not changing the position of an electron by measuring its momentum.
Surveys have their place, to be sure, but they’re inferior compared to other methodologies.