The once-niche rating aggregator has evolved into one of the most trusted and cited websites on the internet.
The quandary here is twofold.
- First, whether a film’s “Tomatometer” rating accurately reflects the quality of that film.
- Second, whether a film’s rating makes or breaks it at the box office.
The first is an easy no. Although the mass-consensus approach suggests fairness, the number that’s bandied around isn’t an indicator of how “good” the movie actually is. Let’s look at some recent examples:
Two big blockbuster movies from this season with reasonably high scores. The first, DC’s Wonder Woman, has an exceptionally high score of 92%. Does that really mean that the movie is only 8% flawed? That it is a masterpiece in the highest percentile?
No, it simply means that 92% of counted reviews deemed the movie “watchable”. Pay attention to the much smaller number marked “Average Rating”. It’s a much humbler 7.5/10, not too far off from Guardians of the Galaxy Vol. 2’s 7.1/10. Which makes total sense! Both are well-made, efficiently assembled popcorn flicks that entertain as they’re meant to. Neither is cinematic gold, but their official ratings grant them far more esteem. Ever wonder why even the best-rated summer blockbusters (the Planet of the Apes reboots, Marvel movies) are relegated to winning strictly technical awards at the Oscars/Golden Globes?
It’s not because of some nefarious conspiracy to lock fun out of the shows. It’s because critics saw them, a majority judged them as passable summer fun, and then later in the year gave higher scores to prestige pics. Simple math. In theory, a movie could score 100% on the Tomatometer while carrying an average rating of 6/10. That would only mean that every critic who reviewed the film thought it wasn’t bad, not that it was a perfect movie (or even a great one). Let’s not forget that this happened once:
Those eight reviewers didn’t think this was a perfect movie; they just thought it wasn’t awful. Note: this score of course sank like a stone once more reviews came in.
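The gap between the two numbers is simple arithmetic: the Tomatometer counts the share of reviews that clear a positivity bar, while the average rating averages the actual scores. Here’s a minimal sketch of that math, using eight hypothetical lukewarm reviews like the ones above (the 6/10 “fresh” threshold and the sample scores are illustrative, not Rotten Tomatoes’ exact methodology):

```python
# Hypothetical review scores (out of 10) for an imaginary film.
# A review counts as "fresh" here if it's merely positive (6/10 or above),
# regardless of how enthusiastic it is.
reviews = [6, 6, 6, 6, 6, 6, 6, 6]  # eight "it wasn't awful" reviews

fresh = sum(1 for score in reviews if score >= 6)
tomatometer = round(100 * fresh / len(reviews))   # percent of positive reviews
average_rating = sum(reviews) / len(reviews)      # mean score out of 10

print(f"Tomatometer: {tomatometer}%")             # prints: Tomatometer: 100%
print(f"Average rating: {average_rating}/10")     # prints: Average rating: 6.0/10
```

A perfect-looking 100% alongside a shrug-worthy 6.0/10: the headline percentage measures unanimity, not enthusiasm.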
Quality does get recognized, though. Just look at two Oscar contenders from this past season:
9/10 and 8.6/10: now there are some real scores that imply real quality. Here is where the majority of critics actually said “this is a GREAT movie”. And both movies were rewarded as such with plenty of trophies.
This sort of problem is widespread in the site’s TV section, where, understandably, there are far fewer reviewers per program. Looking at the ratings, you’d think that everything airing on television right now is immaculate.
7.8/10 is not immaculate. It’s okay, but far from the perfection that 100% implies. The point stands: if the Average Rating were taken into account over the headline Tomatometer score, far fewer movies and shows would shine as brightly as they do now, especially blockbusters. That big bold rating everyone likes to reference for each movie isn’t as conclusive as it seems.
The second point: do these scores affect how a movie does financially? The answer here is a little messier, and examples can be brought in from either side. Sure, it’s a badge of distinction to have a “Certified Fresh” label, but it’s no guarantee that you’re going to rake in the dollars. It’s still largely about the brand, and about consumer interest in that brand.
Consider that just last year we had Warner Bros’ Batman v Superman and Suicide Squad. Both tanked critically (an abysmal 27% and 25%, respectively) but did phenomenally at the box office. Conversely, the Ghostbusters reboot had a middling 70-something score and tanked hard, to the point where all franchise plans were scrapped. This weekend sees these films opening:
Let’s see who prevails: the critically acclaimed, Edgar Wright-helmed Baby Driver, or the 60% movie featuring everyone’s favorite yellow meme lords.
Proponents of the “Rotten Tomatoes Rules” theory find plenty of ammunition in the failures of several critical bombs over the past few months. Transformers, King Arthur, Baywatch, Pirates of the Caribbean, and The Mummy all crashed and burned, but can their dismal performance be strictly attributed to the Almighty Tomato? Or is it a combination of franchise fatigue and lack of interest in those properties? What was the hook for King Arthur? Why was Sir Anthony Hopkins in Transformers? Why was there another Pirates movie? All these unnecessary sequels and reboots clogging up the screen added up to a glut of unwatchable material. People just didn’t care about these things anymore, and the producers didn’t give them any new reason to care. They were bad movies, and organic word of mouth stopped them in their tracks.
It’s no secret that positive reviews filter their way into conversations, but they’re an extension of quality rather than the sole deciding factor. Sometimes it’s just really good marketing that brings people out. Suicide Squad was a DC property, but it was based on lesser-known characters. However, it had the benefit of about two years of “Jared Leto is f***ed up” stories that ingrained it in the public consciousness and brought the crowds out.
The opposite can happen too, sometimes at the last minute. Social media darling A Dog’s Purpose was on track to become the next Marley & Me, a weeper like no other. Then came the wrench in the gears: a video showing a dog being mistreated on the set of the film stopped audiences cold. 2016’s capstone sci-fi epic Passengers didn’t get very healthy reviews, but it was the endless thinkpieces about the film’s problematic treatment of Jennifer Lawrence that prevented the movie from becoming the next Gravity.
Organic buzz is unpredictable, and almost unquantifiable. Strategies to build it among potential viewers flop all the time; there are surprise hits and surprise bombs alike. It’s a tricky biz!
No doubt that critical evaluation can sway public opinion a few notches one way or another. But to say that the Tomatometer decides whether a movie lives or dies at the box office is to ignore several industries’ worth of other factors. It’s the brand and the buzz that still bring in the bucks.