I’ve had a fairly rough 9 days with a very troublesome model.
My original hypotheses are rejected. A piece of the world doesn’t really work the way that I expected.
The great news is that I’m forced to look beyond the clean dataset and write new hypotheses. Even failures can be great. However, it doesn’t make for good commercial reading. Instead of having that nice, clean nugget:
Brands that did x realized y.
There’s a much messier message:
Neither a, b, c, d, e, f, g, h, i, j, k, l, m, n, nor p had a significant impact on y.
That messier message works among marketing scientists. Usually there’s a sound of surprise, then acceptance when they see the summary tables.
It’s not commercially actionable.
It’s far more effective to give very clear ‘to do’ recommendations than clear ‘do not’ recommendations. Memory and recall are precious. It’s hard to get things to stick, and even harder to fish them out later. A laundry list of ‘not significants’ is not effective. Moreover, unethically pulling out a statistically insignificant term doesn’t quite settle it, either.
So instead, tomorrow, I’ll have to change the dependent variable. Y will be e. Or f. Or i. It’s a lot more work, but there are actionable recommendations in there. It has to be commercially interesting, knowing full well that if I poke around without a hypothesis in mind, the odds of being fooled by randomness increase. And I’m energized by having more justification for a chosen paradigm of social media analytics.
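That worry about being fooled by randomness has a concrete shape. As a toy illustration (not the actual analysis here, and assuming independent tests for simplicity): if you screen many candidate variables at the usual alpha = 0.05 with no prior hypothesis, the chance of at least one spurious “significant” result grows fast.

```python
def family_wise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive across k independent
    tests, each run at significance level alpha.
    P(>= 1 false positive) = 1 - (1 - alpha)**k
    """
    return 1 - (1 - alpha) ** k

# With 15 candidate predictors (a through p, skipping o, as above),
# even pure noise has better-than-even odds of looking "significant":
print(round(family_wise_error(0.05, 15), 3))  # → 0.537
```

This is why a hypothesis-first pass is safer than an exploratory trawl, and why corrections like Bonferroni exist when the trawl is unavoidable.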
In sum, it’s been rough. And I’m charging on.
It’s what we do.