Metacritic’s secret review site rankings revealed (Maybe)
Adams Greenwood-Ericksen of Full Sail University revealed today at the Game Developers Conference how Metacritic weighs critics and sites when calculating overall scores… maybe. Greenwood-Ericksen conducted his own research into the matter, and you can see how he believes publications stack up in the Gamasutra article.
“Greenwood-Ericksen stated [he and his students] wanted to carry out the research as Metacritic scores are 'very important to a lot of people' and pointed out that, when publishers withhold financial bonuses when a game doesn't reach its Metacritic target, livelihoods are tied up in the site's work,” Gamasutra reported.
Metacritic was swift to defend itself, calling Greenwood-Ericksen's data “wildly, wholly inaccurate” on its Facebook page before detailing several areas it claims were misrepresented. From the Facebook post:
- We use far fewer tiers than listed in the article.
- The disparity between tiers listed in the article is far more extreme than what we actually use on Metacritic. For example, they suggest that the highest-weighted publications have their scores counted six times as much as the lowest-weighted publications in our Metascore formula. That isn't anywhere close to reality; our publication weights are much closer together and have much less of an impact on the score calculation.
- Last but definitely not least: Our placement of publications in each tier differs from what is displayed in the article. The article overvalues some publications and undervalues others (while ignoring others altogether), sometimes comically so. (In addition, our weights are periodically adjusted as needed if, over time, a publication demonstrates an increase or decrease in overall quality.)
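Whatever the true tiers and weights are, both sides agree on the basic mechanism: a Metascore is a weighted average, with higher-tier publications counting for more. The sketch below is purely hypothetical; the function name, tier weights, and rounding are invented for illustration, since Metacritic's actual formula is secret.

```python
# Hypothetical illustration only: Metacritic's real formula, tiers, and
# weights are not public. This shows how a tiered weighted average works,
# using made-up weights.

def metascore(reviews):
    """reviews: list of (score out of 100, tier weight) tuples."""
    total_weight = sum(w for _, w in reviews)
    # Weighted mean: heavier-weighted outlets pull the average harder.
    return round(sum(s * w for s, w in reviews) / total_weight)

# A top-tier outlet at weight 1.5 and two lower-tier outlets at 1.0:
# the 90 from the top-tier site pulls the score above the plain mean.
print(metascore([(90, 1.5), (70, 1.0), (60, 1.0)]))  # → 76
```

Under this kind of scheme, even modest weight differences shift the final number, which is why the size of the gap between tiers is the crux of the dispute.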
The truth is there isn't enough public information to believe either side. Still, an accurate, public list could be an embarrassment for Metacritic, which probably doesn't want people to know which sites get priority in the “whose opinion is worth most” hierarchy.
This is the frustrating thing about Metacritic. The scores it assigns games are incredibly powerful in the industry and can affect everything from stock prices to developer bonuses. The problem is that we have no idea how those scores are tabulated, or whether the methodology is fair. It's Metacritic's right to keep its secret sauce proprietary, but it's also our right to be skeptical of that methodology until we know what's going on.