A Forecast uses a statistical method and historical data to make a statement about what is likely to occur in the future. A Scenario uses a generalized model, in the absence of contextual historical data, to make a statement about what is likely to occur in the future. A Target is a statement about what the future should be. The linkages between forecasts, scenarios, and target setting are subject to all sorts of phenomena. Anchor-and-Adjust, optimistic thinking, convenient reasoning, and prospection error all come into play. The gap between a target and the predicted future is either a source of dissatisfaction or of celebration. One uses a forecast to minimize that error and, ideally, to be smarter going in.[…]
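To make that gap concrete, here is a minimal sketch. The monthly figures, the target, and the naive linear-trend forecast are all hypothetical assumptions, standing in for whatever method and numbers you would actually use:

```python
# A minimal sketch of the forecast-versus-target gap, using hypothetical
# monthly actuals and a naive linear-trend forecast. The numbers and the
# trend method are illustrative assumptions, not a recommendation.

history = [102, 108, 115, 119, 127, 131]  # hypothetical monthly actuals

# Naive linear trend: average month-over-month change, projected one step ahead.
avg_change = (history[-1] - history[0]) / (len(history) - 1)
forecast = history[-1] + avg_change

target = 150  # a statement about what the future *should* be

gap = target - forecast
print(f"Forecast: {forecast:.1f}")
print(f"Target:   {target}")
print(f"Gap:      {gap:.1f}  (dissatisfaction looms when this is large and positive)")
```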

I encounter a lot of artifacts of analytics communication: dashboards, PowerPoint decks, and Excel files. You can tell a lot about an organization from such artifacts. You can see sandbagging. You can see staff transfers riddled throughout some of them, and you can sense the ghosts of analysts promoted or churned. You can definitely see the ghosts of EVPs long gone. You can sometimes make out the current audience, the originally intended audience, and how incredibly diluted the thing became over time. An analytics report is akin to sand on the beach. Sometimes the tide comes in and scrubs away the footprints. Much more frequently those footprints add up, muddle the situation, and then fossilize. Why it happens and a possible[…]

The Panda Cheese commercials are brilliant and, I’d like to believe, a product of scientific advertising. I have no basis for that, but I’m heaping praise on the creative and the analyst who worked on it. You can see the series of commercials here. Specific elements: Divergent use of a violent panda. Repetitious use of a song across all five ads. Consistent direction (i.e., over-the-head reaction shots from the panda’s POV). Desired behavior demonstrated (“Get another one…”). A divergent tag line, phrased in the negative: “Never Say No To Panda”, which contradicts the affirmative bias we’ve had for years. Brilliant – check it out.

I cut the cable tomorrow. For one specific firm, I will go from being a stable $170/month subscriber, complete with PVR, to being worth nothing. I’m switching my Internet to a non-UBB-restricted wholesaler. I will continue to spend $10/month for Netflix. I will get my live TV with the “free”, Over-The-Air broadcast signal from the CN Tower, of which I have a clear view. Dedicated ad impressions will take a pretty big hit, as the number of must-see, full-attention shows is fewer than five. I can’t see myself suffering through TV without a PVR. I can’t imagine deliberately exposing myself to an abusive medium any longer. That attitude ought to concern broadcasters and marketers alike. I’m not alone in[…]

The paper “Increasing Campaign Effectiveness”, abbreviated ICE, is out. You can find the paper here. ICE is not the successor to Value of a Fan, abbreviated VOAF. We asked different questions. Last year, in response to VOAF, many of my cohorts came forward with brilliant follow-up questions, and the dialogue that ensued contributed to the subsequent study and model design. Work continues. I welcome, in the spirit laid out by Tsang, engagement on the topic. What do you think about Increasing Campaign Effectiveness using social media? What would you consider and explore?

Intelligence means selective ignorance. Imagine how intelligent and ignorant we used to be as a people, just 120 years ago. Some of the first uses of sampling techniques in quantitative methods centered on the use of alcohol in society. They really didn’t have very much machine-readable data back then (the first use was for the 1890 US Census), so the practices of data mining weren’t possible. The entire purpose of sampling, and of sample statistics, existed precisely because no machines could be used to quantify the entire population against some policy question. You try calculating a chi-square statistic on a very large dataset without a calculator or a spreadsheet! Indeed, sampling continues to be used to this day as a cost[…]
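To make the point concrete, here is a minimal sketch of a chi-square test of independence worked out by hand, the kind of sample-sized calculation that was feasible long before machine-readable data. The 2×2 table is entirely hypothetical:

```python
# A minimal sketch of why sampling was the workhorse: a chi-square test of
# independence needs only the counts from a modest sample, not the whole
# population. The hypothetical 2x2 table below might cross some exposure
# (e.g. drinks / does not drink) with some policy-relevant outcome.

observed = [
    [45, 55],   # group A: outcome yes / no
    [30, 70],   # group B: outcome yes / no
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(f"Chi-square statistic: {chi_square:.2f} on 1 degree of freedom")
```

Nothing in that arithmetic requires a census; a few hundred well-drawn observations and a pencil will do.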

Have you heard about Data Market? It is one of the largest free, curated repositories of public data. Benefits: It has internal search that doesn’t suck, so you can find what you’re looking for and get out. It offers the ability to preview the data in tables and charts before you export. It offers the ability to export in popular formats. It’s freemium (the API and LIVE data have a cost). Why am I excited about this? These data sets are very clean, and some of the data has direct uses for analysts in their social-professional lives. They’re there, and you should register and check them out.

The goal of a forecast is to make an accurate prediction about the future state of a system based on the best available evidence. The goal of target setting is to make a statement about a desired future state – with or without a forecast. Targets are political artifacts. You can read all about such dynamics in public policy here. Forecasts, ideally, are scientific artifacts. The interplay between forecasts and targets is particularly interesting. Those who produce sophisticated forecasts should understand that the motivation of those probing the models is to assess whether or not a future state is possible, or, in certain situations, just how probable a given scenario could be. Don’t become trapped in the mindset that a trend[…]

There are at least five types of error related to analytics: instrumentation error, algorithm error, transposition error, statistical error, and interpretation error. 1. Instrumentation Error: when the instrument is measuring a phenomenon incorrectly. This is not to be confused with a human mistaking what an instrument really measures. Rather, this is when the instrument itself is only recording half of something, or not measuring something at all. It’s akin to saying the thermometer is broken. Instrumentation has varying degrees of accuracy. For instance, the unique HTTP cookie is subject to error as a result of a deteriorating cookie retention curve. The instrument continues to work just fine – it’s just that user behavior has changed, affecting its accuracy.[…]
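As a rough illustration of that cookie point, here is a minimal sketch of how measured uniques drift away from actual people. The population size and the weekly churn rate are hypothetical assumptions:

```python
# A minimal sketch of how a deteriorating cookie retention curve inflates
# "unique visitor" counts even though the instrument records faithfully.
# The visitor count and churn rate below are hypothetical assumptions.

true_people = 10_000      # the same people return every week
weeks = 4
weekly_churn = 0.15       # share of visitors who clear or lose their cookie each week

# Week 1: everyone is issued a cookie and counted once.
# Each later week, churned visitors get a fresh cookie and are counted as "new" uniques again.
measured_uniques = true_people + (weeks - 1) * true_people * weekly_churn

print(f"True people:      {true_people}")
print(f"Measured uniques: {measured_uniques:.0f}")
print(f"Overcount factor: {measured_uniques / true_people:.2f}x")
```

That overcount factor is exactly the kind of accuracy drift the thermometer analogy points at: nothing broke, the behaviour underneath the instrument changed.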

An excellent analysis by Allan Engelhardt, from back in 2006 I believe, describes the 3/2 rule of employee productivity. The Coles Notes version is that when you triple the number of employees, you cut their per-employee productivity in half. Check out the diagram below. Pretty scary, right? Naturally, the story is much more complex than portrayed. Some sectors have mild slopes, like technology companies. Arguably, they’re using technology to flatten out the productivity slope. But it’s still slightly negative. Naturally, larger companies scale, so they still make more profit overall. Small companies are very good at doing many things. They become less good as they become large. And then ultimately, they stop being really, really good at anything at all.[…]
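One way to read the 3/2 rule is as a power law: if tripling headcount halves per-employee productivity, productivity scales as headcount raised to log(1/2)/log(3), roughly -0.63. Here is a minimal sketch of that reading, with purely hypothetical baseline figures:

```python
# A minimal sketch of the 3/2 rule read as a power law: tripling headcount
# halves revenue per employee, so per-head productivity scales as
# n ** (log(0.5) / log(3)) ~= n ** -0.63. Baseline figures are hypothetical.
import math

EXPONENT = math.log(0.5) / math.log(3)   # ~ -0.631

def productivity_per_employee(headcount, base_headcount=10, base_productivity=100_000):
    """Hypothetical revenue per employee (dollars) under the 3/2 rule."""
    return base_productivity * (headcount / base_headcount) ** EXPONENT

for n in (10, 30, 90, 270):
    per_head = productivity_per_employee(n)
    print(f"{n:>4} employees: ${per_head:>9,.0f} per head, ${n * per_head:>12,.0f} total")
```

Under those assumptions total output still rises with headcount, which squares with the point that larger companies still make more profit overall, even as each individual employee produces less.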