Attention conservation notice: Review and notes from a book discussing an academic topic that will likely only interest you insofar as it generalizes to other topics, unless you are both a huge stats and film nerd.
I'm fascinated by movie ratings and what they tell us about: 1) the best ways to use rigorous methods to study the quality of a subjective output, 2) how variable people's assessments of quality are, and 3) how people conceptualize their own opinion in the context of everyone else's. Dean Simonton is a giant in the psychology of creativity, and I loved his book Creativity in Science. So, as soon as I saw this one, I clicked "buy it now" on its Amazon page.
My typical gripe against academic investigations of movie ratings is that they discount imdb.com, a huge resource with millions of data points, broken down by age, gender, and geographical location, on an incredibly rich array of movies. So, soon after buying the book, I searched in the Kindle app for "imdb" and found very few results. This predisposed me to dislike it.
A few of my other gripes:
1) It takes a while to get used to Simonton's academic writing style.
2) The book takes few risks stylistically. Each chapter feels like it could be its own separate article. Thus, he does not take full advantage of the long-form medium.
3) When he discussed a few of the measures (such as the correlation between different award shows), I felt there were some issues with his account of the causality. Surely there is some non-negligible probability that people take others' ratings into account when they make their own judgments. He mentions this sometimes, but not enough for me, and ideally he'd come up with some creative way to try to get around it.
4) Finally, there are a few typos. I actually like seeing typos, because it makes me think that I am learning from a more niche source that others are less likely to appreciate, but YMMV.
By midway through the book, Simonton had won me back to a large extent. His analyses of his data are well done, and he supplies tables so you can look at the regression coefficients yourself. And there are many good nuggets, such as:
- the best predictors of higher ratings are awards for better stories (e.g., best screenplay and best director), as opposed to visual or musical awards
- having individuals on the production team who play multiple roles (such as writer, cinematographer, and editor all at once) makes the film more successful, presumably due to creative freedom
- some amount of repeated collaboration over multiple films with the same individuals, but not too much, is optimal for winning awards (i.e., there is a trade-off between stimulation and stagnation)
- box office returns are inversely correlated with success at awards shows
- the typical film is unprofitable; "about 80% of Hollywood's entire profit can be credited to just a little over 6% of the movies"
- the curse of the superstar: "if a star is paid the expected increase in revenue associated with his or her performance in a movie then the movie will almost always lose money" (this is because revenue is so positively skewed; see the simulation sketch after this list)
- he divides movies into two types: those that are extremely successful commercially, and those that are extremely successful artistically (the former are often used to subsidize the latter)
- negative critic reviews hurt box office returns more than positive reviews help them
- on ratings, critics and consumers have similar tastes, although consumers' tastes are more difficult to predict, presumably because their proclivities are more diverse
- for a consumer, the most important factor for whether they will watch a movie is its genre (#2 is word of mouth)
- dramas do worse in the box office, better at the awards shows; comedies are the reverse
- PG-13 movies make the most money; some romance, but no actual nudity, is best (and lots of action but no gore)
- on average, sequels do far worse in ratings and awards than the original movies
- the greater the involvement of the original author in an adapted movie, the less money it makes (authors interfere more and may care more about "artistic integrity" than about making money)
- directors tend to peak in their late 30s; they have more success in their late 20s than their late 50s, on average
- he divides directors into two types: conceptual (innovative and imaginative; think Welles) and experimental (technical and exacting; think Hitchcock)
- conceptual directors express ideas through visual imagery and emotions, often leave behind one defining film, and decline quickly
- experimental directors emphasize more realistic themes, slowly improve their methods, and their best films often occur towards (but almost never *at*) the end of their careers
- female actors make less money than their male counterparts, and the best picture award correlates much better with best male actor than best female actor
- awards for scores are much better predictors of a film's quality than awards for songs
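The superstar point is easy to see with a quick simulation. Here is a minimal sketch (mine, not from the book) assuming a hypothetical lognormal distribution for the revenue a star adds to a film: if the star's salary is set to the mean uplift, most individual films still lose money on the deal, because the median of a right-skewed distribution sits well below its mean.

```python
import numpy as np

# Minimal sketch of the skew argument behind the "curse of the superstar",
# assuming a hypothetical lognormal model for the revenue a star adds to a film.
# The parameters are illustrative only; none of this comes from the book's data.
rng = np.random.default_rng(0)

# Incremental revenue attributable to the star, in $M: heavily right-skewed.
incremental_revenue = rng.lognormal(mean=2.0, sigma=1.2, size=100_000)

# Pay the star the *expected* (mean) increase in revenue, as in the quote above.
star_salary = incremental_revenue.mean()

# On most individual films the studio then comes out behind, because the
# median of a right-skewed distribution sits well below its mean.
profit = incremental_revenue - star_salary
print(f"Salary set to mean uplift:  ${star_salary:,.1f}M")
print(f"Median uplift:              ${np.median(incremental_revenue):,.1f}M")
print(f"Films losing money on the star: {(profit < 0).mean():.0%}")
```

With these made-up parameters, roughly three quarters of the simulated films lose money on the star even though the salary exactly matches the expected uplift, which is the intuition behind both the superstar curse and the "80% of profit from 6% of movies" figure.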
All in all, this book is far from perfect, but it is likely the best full-length quantitative treatment of movie ratings available. If you are interested in the topic, and occasionally find yourself doing things like browsing the rating histograms on imdb, then this is essential reading.