First of all, the budget allocated to monitoring online sources is a small fraction of the budget allocated to traditional business and marketing intelligence projects. Companies spend significantly more money on tools to analyze internal structured data (sales, accounting, inventory, etc.) and, even more troubling, sentiment and online monitoring account for a very small fraction of the budget invested in traditional market research (that is, a set of pre-defined questions where the consumer has to choose one answer and, sometimes, gets to add a few words in free text). To put it plainly, companies worldwide prefer to use information that is
- expensive to gather (projects are very often in the range of hundreds of thousands of dollars);
- biased (a large body of research shows that users do not really answer these questions freely);
- static, or in other words, information that describes the situation at a specific moment in time and is compiled and reported sometime later, when the situation may already be different.
In any case, the point I want to make is not that traditional market research is useless. I think it has its rightful place in the mix of competitive intelligence initiatives any company has to undertake. Rather, my point is that it needs to be integrated with, and take advantage of, the wealth of information the explosion of the Internet has made available. Compared to traditional market research, online sentiment monitoring has the following advantages:
- it’s relatively inexpensive (if done with the right technology);
- it’s less biased (and this bias tends to decrease as more of the population goes online);
- it provides a dynamic, real-time view of the market.
This established behavior is very resistant to change. When I introduce our product, Cogito Monitor, to decision makers at enterprises and mid-size companies, I often get the same objections: they immediately focus all their attention on finding errors and noise in the sentiment levels the system identifies automatically, even though the product has proven in many implementations to provide very high precision, and even though that noise has no impact whatsoever on the reliability of the summary data (false positives are equally distributed among the different sentiment levels).
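The intuition that evenly distributed classification errors leave the aggregate picture essentially intact can be illustrated with a quick simulation. The numbers below are purely hypothetical (they do not reflect Cogito Monitor's actual error rates or sentiment mix): a classifier that mislabels 10% of mentions, with errors spread uniformly across the other sentiment levels, still reproduces the overall sentiment shares to within a few percentage points.

```python
import random

random.seed(42)

# Hypothetical corpus: 10,000 mentions with a known sentiment mix.
labels = ["negative", "neutral", "positive"]
true_mix = {"negative": 0.2, "neutral": 0.5, "positive": 0.3}
mentions = [lab for lab in labels for _ in range(int(true_mix[lab] * 10_000))]

def classify(label, error_rate=0.10):
    """Simulated classifier: 10% of mentions are mislabeled,
    and those errors fall uniformly on the other labels."""
    if random.random() < error_rate:
        return random.choice([lab for lab in labels if lab != label])
    return label

predicted = [classify(lab) for lab in mentions]

def share(seq, label):
    return seq.count(label) / len(seq)

# Compare true vs. predicted aggregate shares per sentiment level.
for lab in labels:
    print(f"{lab:>8}: true {share(mentions, lab):.3f}  "
          f"predicted {share(predicted, lab):.3f}")
```

Running this shows the predicted shares deviating from the true shares by only a couple of percentage points: plenty of individual mentions are misclassified, but the summary view a decision maker actually consumes barely moves.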
I could argue that traditional business intelligence and market research projects probably offer similar results in terms of reliability, and I am not claiming that our product is perfect; what I really want to question is the rationality of the objection. If the mistakes, such as they are, are statistically irrelevant, why resist using this information, in conjunction with any other information already at hand, to support the decision-making process? To what are they comparing the precision of the online monitoring tool? Instead of comparing it to what they actually have today, they seem to compare it to an ideal system or process providing 100% precision and recall. And when they resist adopting these tools, they effectively choose to sit, like George W. Bush on September 10, 2001, relying on data they are comfortable with but that gives an incomplete picture of what is actually happening in the marketplace, when they should instead be investing in resources able to interpret the signals of brewing storms, often still weak and confused, that are available on social media and can dramatically impact their business.
Author: Luca Scagliarini