There’s a story in Variety about the failure of traditional box-office tracking both to predict the summer’s big successes and to warn that some films were headed for abject failure.

The story goes to pains to identify social media, ratings aggregators and other outside influences as the biggest factors behind this shift, saying that all these outside opinions are sabotaging the carefully constructed campaigns from the studios. With everyone in the audience listening to their Twitter and Facebook friends and checking out Rotten Tomatoes scores, the theory goes, the tracking has become unreliable at best. Movies that tracked well bombed, and movies that tracked just alright became international blockbusters. And it’s all because people saw their friends’ opening-night reviews and gave those more weight than the studio’s campaign. There’s a certain amount of truth to all that.

[Image: Rotten Tomatoes logo]

There have been stories for years about how social media conversations are undermining studio campaigns, to which I’ve always said: what you have isn’t a social media problem, it’s a word-of-mouth problem, since social platforms just allow word-of-mouth (WoM) to be amplified exponentially. What we’re seeing here is more systemic, though, and points to a problem with the existing models and tools.

First, there’s the issue of how the tracking is done. I don’t know the specifics of how things are currently done, but if the process doesn’t include a healthy amount of social media monitoring then there’s a big hole in the methodology, one big enough that the results are almost guaranteed to be flawed. It should be a mix of traditional practices – you know, calling old people on their landlines – and newer social monitoring and web scraping, all put into one blender for aggregation and analysis.
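To make that concrete, here’s a minimal sketch of what such a blender could look like. Everything in it – the field names, the baseline, the 60/40 weighting – is an invented assumption for illustration, not anyone’s actual tracking methodology.

```python
# Hypothetical sketch: blending traditional survey tracking with social
# signals into one score. All inputs, field names, and weights here are
# illustrative assumptions, not any real tracker's methodology.

def blended_interest_score(survey, social_mentions, survey_weight=0.6):
    """Combine phone-survey 'definite interest' with social-media volume
    and sentiment into a single 0-100 tracking score."""
    # Traditional component: share of respondents expressing definite interest.
    survey_score = 100.0 * survey["definitely_interested"] / survey["respondents"]

    # Social component: mention volume normalized against a baseline for
    # comparable releases, then scaled by average sentiment (-1..1).
    volume = len(social_mentions)
    avg_sentiment = sum(m["sentiment"] for m in social_mentions) / max(volume, 1)
    volume_score = min(100.0, 100.0 * volume / survey["baseline_mentions"])
    social_score = volume_score * (0.5 + 0.5 * avg_sentiment)

    return survey_weight * survey_score + (1 - survey_weight) * social_score


survey = {"respondents": 1200, "definitely_interested": 240, "baseline_mentions": 5000}
mentions = [{"sentiment": 0.4}, {"sentiment": -0.2}, {"sentiment": 0.1}]
print(round(blended_interest_score(survey, mentions), 1))
```

The point isn’t the particular weights; it’s that a single report folds both signals together instead of treating the phone survey as the whole story.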

[Image: The full team from Fantastic Four (2015)]

But that leads to the caveat. While you’d think monitoring software and systems would be able to predict how any given movie is going to do in any given zip code, there’s one problem many of these packages still have trouble with: sentiment. Too often, in my experience, results are littered with incorrect sentiment tags as neutral stories – i.e., simple Tweets of a news story’s headline with no additional commentary added – are labeled as negative. Or the system misreads the language and calls a negative Tweet “positive.” Sorting through these and either manually correcting the errors or adjusting the numbers after reviewing what’s been generated is incredibly time-consuming. In short, these packages are not yet ready to provide consistently reliable results that stakeholders can bank on.
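As one illustration of how a workaround might look, here’s a hedged sketch using the open-source VADER analyzer: any tweet that’s just a pasted headline with no added commentary gets forced to neutral before it can pollute the aggregate. The headline list and the matching heuristic are my assumptions, not a fix any vendor actually ships.

```python
# Sketch of a workaround for the neutral-story problem, using the
# open-source vaderSentiment package (pip install vaderSentiment).
# The known_headlines list and matching rule are assumptions for illustration.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
known_headlines = {"'fantastic four' reboot stumbles in opening weekend"}

def tag_sentiment(tweet_text):
    # A tweet that is just a pasted headline (plus maybe a link) carries no
    # opinion of its own, so force it to neutral instead of trusting the model.
    stripped = " ".join(w for w in tweet_text.lower().split() if not w.startswith("http"))
    if stripped in known_headlines:
        return "neutral"

    # Otherwise fall back to VADER's compound score with its standard cutoffs.
    score = analyzer.polarity_scores(tweet_text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

# A bare headline share would otherwise be tagged negative ("stumbles").
print(tag_sentiment("'Fantastic Four' reboot stumbles in opening weekend http://t.co/x"))
```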

Issues with tracking aside, I’ll come back to a point I’ve made repeatedly, which is that the best way to change the conversation is to get involved in it. If it looks like sentiment is heading south, do something to bolster it besides just waving your arms and yelling “No, everything is fine!” Identify the conversation starters/leaders and engage them to see how you can change their opinion, not through coercion but through a conversation. Tools like LittleBird are great for doing this kind of work.
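For those without a commercial tool, a rough DIY approximation of that leader-finding step might look like the following sketch, which builds a who-amplifies-whom graph and ranks accounts with PageRank via networkx. The interaction data is invented for illustration, and this is not how LittleBird itself works.

```python
# Rough DIY approximation of influencer identification: build a directed
# graph of who-amplifies-whom and rank authors with PageRank (networkx).
# The interaction data below is invented for illustration.
import networkx as nx

# Each edge (a, b) means user a retweeted or replied to user b.
interactions = [
    ("fan_01", "critic_a"), ("fan_02", "critic_a"), ("fan_03", "critic_a"),
    ("fan_02", "blogger_b"), ("fan_04", "blogger_b"),
    ("critic_a", "blogger_b"),
]

G = nx.DiGraph()
G.add_edges_from(interactions)

# PageRank flows influence along edges, so the most-amplified accounts
# surface at the top; those are the people worth engaging directly.
scores = nx.pagerank(G)
leaders = sorted(scores, key=scores.get, reverse=True)[:3]
print(leaders)
```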

There’s never going to be a cure for a bad product. That’s true for movies, cars, razors…anything. If it just flat-out stinks, people will say so and their opinion will resonate with the people in their network. Study after study has shown that, particularly for younger age groups, peer opinions carry more weight than marketing and advertising. But actual engagement has the potential to take something that’s getting undue negative buzz and turn it into, at the very least, more of a success than it otherwise would have been.
