I started using and contributing to customer reviews almost 10 years ago, when Amazon introduced them in its book store. Even before then, I participated in user discussions about this or that product on various user groups, but the lack of structure in those venues severely limited their value.
The October 5, 2009 Wall Street Journal article “On the Internet, Everyone’s a Critic But They’re Not Very Critical” by Geoffrey A. Fowler and Joseph De Avila (sorry, the WSJ will ask you to subscribe) reopens an old but never-ending discussion about the authenticity of customer reviews. There are actually two claims that need to be addressed in this framework:
1. Retail sites that host customer reviews allegedly manipulate, mitigate, or obstruct the visibility of the true reputations of the products they sell.
2. The overall value of the reviews is questionable for a consumer looking to reduce the uncertainty of a purchase.
I have very little personal experience with evidence of breaches in the integrity of the review management process over the years. However, I am well aware of private and public accounts of such practices, and I did read Bazaarvoice’s claims that they employ “mitigators” for the reviews they manage. It is not clear from the published writings what exactly their role and responsibility in the matter was, but here are some examples of such claims:
Mechanist.tm writes “I recently purchased a NAS from a well-known online computer component shop. I have purchased several items from the website and have never had much trouble before. That was until I realized what I had bought was a terrible NAS. All the reviews on the site from users seemed very good. After a little research, it became clear that the product in question was indeed terrible. After finding the product pretty much useless for its intended purpose, I proceeded to write a review for it on the website to inform other would-be buyers. After about a week, I noticed that the review never made it up there, so I wrote another one just in case. After several attempts to leave a negative review for the product, I realized that the website was screening reviews and only posting the ones that made the products look good. All the reviews on the website are positive; I’ve only found one at less than 3 out of 5 stars. Is this legal? Ethically speaking, it’s wrong, and it’s intentionally misleading to the customer. Is there a good place to report behavior like this? How common is this among online retailers who provide user reviews?”
Various claims about Yelp’s handling of its restaurant reviews were widely publicized in the press and the blogosphere, and the trustworthiness of mixing reviews with an advertising-sponsorship business model was questioned.
I am aware of quite a few well-documented instances of attempts to manipulate public trust by overzealous, not too smart, and surely unethical marketers. You can find some references to those in my Evolution of BPR blog, the WSJ Blog, as well as many other places. This practice is illegal, and the article in the New York Times describes the precedent-setting case. The incredible transparency afforded by the Internet has also exposed less egregious attempts by others to compromise public trust using the most popular customer review sites, and those attempts succeeded only in eroding the reputations of the companies involved. However, we are citing a dozen-or-so known examples, involving hundreds of actual reviews, while tens of millions of customer-generated reviews have been published over the years. Is it reasonable to dismiss the public service of a multitude of socially-minded individuals for the sins of a few corrupt or misguided ones?
Lastly, I would like to address the bias argument that opens the WSJ article that inspired this post:
“The Web can be a mean-spirited place. But when it comes to online reviews, the Internet is a village where the books are strong, YouTube clips are good-looking and the dog food is above average.
One of the Web’s little secrets is that when consumers write online reviews, they tend to leave positive ratings: The average grade for things online is about 4.3 stars out of five.”
I assert that this average grade is meaningless and indicates nothing but poor adoption of a scoring methodology. As a co-founder of a start-up focused on the extraction of Product Reputation analytics from Customer Reviews, I often hear diametrically opposed, emotionally charged opinions that reviews are too negative or too positive. These opinions are based on their owners’ beliefs and experiences, which are anecdotal, and on the excessively ambiguous use of 5-star ratings by the social media sites mentioned in the article. The Likert scale (5 stars) was invented at the beginning of the last century for market research. Its use and interpretation are bound by a rigorous methodology that is completely ignored in customer review entry processes.
When you review a product and give it 5 stars, does that mean you are satisfied or delighted? Did your experience match your expectations or exceed them? When you find the experience with a product unacceptable and end up returning it, why are you “forced” to rate it with at least 1 star (a rating of 0 stars is usually not available)? Does that mean you are 20% satisfied with the product?
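The 20% figure comes from reading the star scale as a linear percentage — an interpretation the rating widgets themselves never state. A small sketch makes the distortion of the forced 1-star floor concrete (the linear mapping here is my assumption for illustration, not a standard):

```python
# Illustrative only: interpret star ratings as satisfaction percentages
# under a naive linear mapping (an assumption, not an industry standard).
def stars_to_percent(stars, max_stars=5):
    """Map a star rating onto a 0-100% scale, read linearly."""
    return 100.0 * stars / max_stars

# A forced minimum of 1 star implies a 20% "satisfaction" floor,
# even for a product the reviewer found unacceptable and returned.
print(stars_to_percent(1))  # 20.0
print(stars_to_percent(5))  # 100.0
```

Under that reading, a reviewer who would honestly score a returned product at zero is silently recorded as one-fifth satisfied — one reason the 4.3-star average cited by the WSJ says little by itself.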
Since there is no rhyme or consistent reason in the collection of these reviews, the only way to get practical use out of this data is to read and analyze each and every review. That can be very time consuming, which is why we process this data with opinion-mining/sentiment-mining software to produce accurate and consistent scores. As a result, I assure you that the averages come out very different, and a lot more useful.
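To show the kind of processing meant here, below is a deliberately minimal, hypothetical lexicon-based scorer — not the author’s actual system, and far cruder than production opinion-mining software — illustrating how a score derived from the review text can diverge from the star rating the reviewer entered:

```python
# A toy, hypothetical lexicon-based sentiment scorer (illustration only;
# the word lists and scoring rule are invented for this sketch).
POSITIVE = {"great", "excellent", "fast", "reliable", "useful"}
NEGATIVE = {"terrible", "useless", "slow", "broken", "misleading"}

def sentiment_score(review_text):
    """Return a score in [-1, 1]: the fraction of matched opinion words
    that are positive minus the fraction that are negative."""
    words = [w.strip(".,!?") for w in review_text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# A review entered with, say, 4 stars can still read as strongly negative:
print(sentiment_score("The NAS was terrible and useless, truly misleading."))
```

Aggregating text-derived scores like this — rather than the raw, methodology-free star entries — is what makes the resulting averages consistent enough to compare across products.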