The input and collaboration of many create value in most cases, but probably not in all. One of the best examples of the concept of “crowdsourcing” is Wikipedia, but there are some troubling signals that have come out of this social experiment:
More than 49,000 editors left Wikipedia’s English-language edition during the first three months of 2009, compared with only 4,900 for the same quarter a year earlier, according to the Journal, quoting Spanish researcher Felipe Ortega, who analyzes Wikipedia’s online data. Though the service still boasts about 3 million active contributors, volunteers are leaving more rapidly than new ones are joining, the Journal said.
I fancy myself relatively well informed, and joined as a volunteer a few months ago, but on reflection found nothing particularly valuable to contribute to the existing entries. How many people in a “crowd” make “crowdsourcing” meaningful?
Wikipedia co-founder Jimmy Wales discussed the site in an interview with Silicon.com earlier this month. With 13 million articles now written and edited by volunteers, Wales sees conflict among multiple contributors as the exception.
“We really tend to use less inflammatory words–try to stick to basic facts and so on. And that’s come about over time. You have people come together [on Wikipedia] with different viewpoints but in general they tend to be trying to work in good faith to collaborate and compromise with other people.”
Wales also pointed out that most articles are written by a small number of people.
“One of the things that’s important to know about Wikipedia is that the entries that are edited by hundreds of people are really anomalies,” he told Silicon.com.
So at what point does the wisdom of the crowd turn into the madness of the mob? I do not envy Wales, as the emotions and egos involved make the process he manages as explosive as nuclear fusion, but I am very grateful that he does manage it: many studies have shown that Wikipedia’s authority is every bit as high as, if not higher than, that of traditional encyclopedias. Accuracy is the context in which an encyclopedia’s authority is judged.
The debate about the comparative accuracy of market research methods (online vs. telephone) made me think about the context in which this debate is framed.
The debate over the accuracy–and quality–of survey research conducted online is flaring at the moment, at least partly in response to a paper by Yeager, Krosnick, Chang, Javitz, Levendusky, Simpson and Wang: “Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples.”
In my opinion, the methods employed to conduct the research are secondary to the findings the researcher attempts to discover. This opinion usually draws very heated arguments from purists who are concerned that “biases” cannot be avoided if the research is “tainted” by preconceived expectations. I totally agree – biases cannot be avoided, nor should we try to avoid them. Without biases the results of research are meaningless, and it is far more useful to introduce the power of context and some structure into the process.
Meaningful, representative and actionable results of market research are more important than its marginal accuracy.