BadCosmonaut wrote: This chart: "Criticker's recommendation system is designed to work just as well with rating data that is not 'perfectly distributed'."

This may be the design intention, but I disagree that it works.
https://www.criticker.com/forum/viewtopic.php?f=10&t=7255#p66438 wrote:

Criticker algorithm:
advantage: severe and lenient voters can be compared to each other
disadvantage: since we avoid bad movies, our lowest-ranked movies (even with nominally excellent scores) are treated by Criticker as "terrible"
comparing scores at face value:
advantage: the scores you assign as a user are respected
disadvantage: severe and lenient voters cannot be compared to each other
The upshot seems to be:
The ideal system would use an algorithm that respects users' votes while faithfully translating different voting systems into one another. I don't yet have an idea of how to accomplish this.
When I started here, I very much liked Criticker's way of "stretching out" your scores across a 0-10 scale (old tier system) or a 0-100 scale (new percentile system). It is certainly a unique feature among movie ranking sites.
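Just to make concrete what I mean by "stretching out" (this is only a toy sketch of my own, with made-up film titles and numbers, and certainly not Criticker's actual formula): each of your scores can be turned into a percentile rank within your own ratings, which is roughly what the 0-100 system feels like from the outside.

[code]
# Toy sketch of percentile "stretching" (my own reconstruction, NOT Criticker's real algorithm)

def stretch_to_percentiles(my_scores):
    """Map a user's raw scores to 0-100 percentile ranks within their own ratings."""
    values = sorted(my_scores.values())
    n = len(values)
    stretched = {}
    for title, score in my_scores.items():
        # fraction of this user's own ratings that fall strictly below this score
        below = sum(1 for v in values if v < score)
        stretched[title] = round(100 * below / n)
    return stretched

# A lenient voter whose raw scores all sit between 70 and 95:
lenient = {"Film A": 95, "Film B": 85, "Film C": 78, "Film D": 70}
print(stretch_to_percentiles(lenient))
# {'Film A': 75, 'Film B': 50, 'Film C': 25, 'Film D': 0}
# Film D, a 70 at face value, lands at 0: exactly the "lowest-ranked movie
# treated as terrible" problem described above.
[/code]

That the 75/50/25/0 spread looks nothing like the 95/85/78/70 the voter actually typed is exactly what I mean by scores not being respected.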
But the more I think about it, the more it bothers me that our scores are not respected (e.g. movies with excellent scores are treated by Criticker as "bad" or "terrible"). Criticker presumes to know better than we do what our scores mean. That is a little patronizing.
Now I get that this is not done with bad intent. I even concede that when I see users rating a film at 70 while their mini-review says the film is terrible, I understand the need for some kind of intervention. On the other hand, there are very experienced movie lovers who meticulously ponder which scores to assign, and taking that away from them just doesn't feel right either.
Unfortunately, I don't know what the solution is. But I think some kind of compromise is needed, i.e. an algorithm that is more respectful of users' rankings.
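Just to illustrate the kind of compromise I mean (again only a rough sketch with made-up numbers, not a claim about how Criticker does or should work): one could express each score relative to the user's own average and spread. That keeps the relative distances a voter chose while still making a severe voter and a lenient voter comparable.

[code]
# Illustration only: per-user standardization as one possible "compromise"
# (my own sketch, not Criticker's method and not a finished proposal)
from statistics import mean, stdev

def standardize(scores):
    """Express each score relative to the user's own average and spread."""
    m = mean(scores.values())
    sd = stdev(scores.values())
    return {title: round((s - m) / sd, 2) for title, s in scores.items()}

severe  = {"Film A": 60, "Film B": 45, "Film C": 30}   # harsh voter
lenient = {"Film A": 95, "Film B": 85, "Film C": 75}   # generous voter
print(standardize(severe))   # {'Film A': 1.0, 'Film B': 0.0, 'Film C': -1.0}
print(standardize(lenient))  # {'Film A': 1.0, 'Film B': 0.0, 'Film C': -1.0}
# Both voters show the same relative preference A > B > C, and the lenient
# voter's 75 reads as "below their own average" rather than "terrible".
[/code]

Whether something like this would actually feel more respectful in practice is exactly the open question for me.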