Why existing book rating systems are useless

Fifty Shades of Grey - E.L. James

Before I launch into it: this post is intended as a critique of the rating systems used by Amazon and of course Booklikes. It's not meant as a critique of the sites themselves.

 

Did you ever read Fifty Shades of Grey? If not, and if you are undecided on doing so, you might feel compelled to visit its Amazon book page to see if it's any good.

 

After looking, you see it has 3.4 stars out of five. That's above average, so as a reader I probably won't find it a great book, but I won't hate it either.

 

Or will I?

 

Here's the expanded rating as of September 2014 on Amazon:

 

You'll immediately notice the strange distribution of ratings. Rather than a nice bell curve, where the average rating is also the most common one, we have a distribution where, at best, 5,000 people voted close to that average: less than 20% of the total, and realistically closer to 10%. In fact, looking at this distribution, I can tell it's far more likely that if I were to read this book, I'm going to either love it or hate it.
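A quick sketch makes the point concrete. The star counts below are invented for illustration, not Amazon's actual figures, but they show how a sharply polarized distribution still averages out to a middling-looking 3.4:

```python
# Hypothetical star counts for a polarizing book (made-up numbers,
# not Amazon's real data for Fifty Shades of Grey).
counts = {5: 12500, 4: 2000, 3: 2500, 2: 2500, 1: 7000}

total = sum(counts.values())
average = sum(stars * n for stars, n in counts.items()) / total

print(f"average rating: {average:.1f}")                    # ~3.4
print(f"share of 3-star votes: {counts[3] / total:.0%}")   # under 10%
```

Most voters sit at the extremes, yet the headline number lands near the middle, which is exactly the information the single average throws away.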

 

And this is why this sort of rating system is useless. From the rating alone I have no way of knowing which of those two possibilities it's going to be. Should I buy this book instantly, or stay as far away from it as possible?

 

Of course, you might say, that's what the reviews are for. But even ignoring the people who write reviews in bad faith, the reviews don't help much either. Even if every single review is genuine, there is no guarantee that their opinion will match mine.

 

A few years ago, I read a book (which unfortunately I cannot remember the title of) that I put away after reading only a few chapters. I hated it, but not because it was a bad book: I hated it because it was so good at describing a story I hated reading about.

Giving a book like that one or two stars would be in line with my experience of it, but at the same time it would be dishonest about the book's quality.

 

And that brings us to the heart of the problem: what does a numerical rating represent? Is it a measure of a book's quality, or of the reviewer's opinion? It's the latter, of course, and that is a problem. Opinions vary wildly, as the example above shows, and so a single number meant to distill all those opinions doesn't help me at all.

 

Unless...

 

Unless I know who is handing out that rating. If I know a reviewer whose taste closely aligns with mine, I can actually assign value to any score they hand out. Yet current systems offer no real support for finding these reviewers. You'd have to trawl through thousands of reviews of different books to locate such a person.
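The matching itself wouldn't even be hard. Here is a minimal sketch of reviewer-to-reviewer similarity, assuming each reviewer's ratings are available as a simple mapping of book title to stars; the names and ratings are invented for illustration:

```python
def similarity(mine, theirs):
    """Score two reviewers' taste on a 0..1 scale (1.0 = identical),
    using the mean absolute rating difference over books both rated.
    Returns None when they have no books in common."""
    shared = mine.keys() & theirs.keys()
    if not shared:
        return None
    mean_diff = sum(abs(mine[b] - theirs[b]) for b in shared) / len(shared)
    return 1 - mean_diff / 4  # star ratings span 1..5, so max diff is 4

# Invented example data.
me = {"Dune": 5, "Fifty Shades of Grey": 1, "The Hobbit": 4}
reviewer_a = {"Dune": 5, "Fifty Shades of Grey": 2, "The Hobbit": 4}
reviewer_b = {"Dune": 1, "Fifty Shades of Grey": 5}

print(similarity(me, reviewer_a))  # close to 1.0: a useful reviewer for me
print(similarity(me, reviewer_b))  # 0.0: their scores tell me nothing
```

A real site would need fancier techniques (this is essentially the collaborative-filtering problem), but even something this crude would let a reader surface the handful of reviewers whose scores actually predict their own.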

 

I can understand why no book-selling site bothers with such a thing: the rating system as it stands is easy to implement, after all. A good system that focuses on the reviewer rather than the rating they give would be far more difficult. Yet without such a system I'm stuck with what we have: a rating system that can easily be abused by sock puppet accounts, shills, and reviewers acting in bad faith. All because it focuses on the rating they give and not on who they are.

 

And that is a sad thing.