PR’s Fake News: Finding Meaningful Metrics 

How many people were actually at Donald Trump’s inauguration? Official estimates put attendance at a maximum of 900,000 – 100,000 fewer than at Obama’s second inauguration and close to a million fewer than at his first. Certain publications were happy to pitch much lower than that, and soon a stream of images flooded social media showing vast swathes of empty space, earning journalists the reputation of being “among the most dishonest human beings on earth”, according to the incoming President.

In fact, so great is the challenge of fake news that President Trump used his first full day in office to unleash a remarkably bitter attack on the news media, falsely accusing journalists of both inventing a rift between him and the intelligence agencies and deliberately understating the size of his inauguration crowd. But whatever your opinion of the man, his inauguration showed that the time for playing guesstimates is up, and media and PR professionals need to start paying attention.

The PR industry has long wrestled with metrics and has at times become consumed by them. Proving the value of brand awareness and the actionable response to positive press coverage is no easy task, and in trying we have all but created our industry’s own version of ‘fake news’: statistics that try to convey a meaningful message but often end up saying very little at all. Without getting all President Trump on the industry, we really need to look at our numbers.

Let’s start with AVEs. I mean, it’s 2017 and we are still constantly asked to supply these godforsaken metrics. These figures date back to the days when PR account executives would sit with a ruler and a bundle of newspapers, measuring the size of a piece of coverage to generate an “equivalent advertising value” for that space. Today, people use them because the figures returned are often much higher than any PR budget, so they make PR people look good – but it’s fake news.
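To see how crude the arithmetic really is, here is a minimal sketch of the classic AVE calculation. The rate-card figure and the 60-column-inch feature below are placeholder assumptions for illustration, not real rates:

```python
# Hedged sketch of the classic AVE calculation: column inches of coverage
# multiplied by the publication's per-inch advertising rate. The rate used
# here is an invented placeholder, not any publication's actual rate card.

def advertising_value_equivalent(column_inches: float, rate_per_inch: float) -> float:
    """Return the 'equivalent advertising value' of a piece of coverage."""
    return column_inches * rate_per_inch

# A hypothetical half-page feature (60 column inches) at £45 per inch:
ave = advertising_value_equivalent(60, 45.0)
print(f"AVE: £{ave:,.2f}")  # AVE: £2,700.00
```

Two inputs, one multiplication – and no connection at all to whether anyone read, shared or acted on the coverage.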

Even the more sophisticated measurements are struggling to keep up. Take SimilarWeb. It collects vast amounts of data and applies an evolving algorithm with a scalable estimator to produce a ‘best guess’ of how many people view a piece of coverage. It takes into account the popularity of the publication, average click-through rates and where the coverage sits on the homepage, but it still returns ludicrously inflated numbers that rarely correlate with the number of shares and other actionable responses. In fact, the PR numbers often converge with those of the top piece of news content from that day, even though anyone with an ounce of rationality would see the disparity.
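A back-of-envelope version of this kind of ‘best guess’ estimator might look like the following. Every figure here – the position multipliers, the traffic number, the click-through rate – is an invented assumption for illustration, not SimilarWeb’s actual model:

```python
# Illustrative sketch of a coverage-views estimator of the kind described
# above: site traffic x homepage-position multiplier x click-through rate.
# All weights and inputs are hypothetical; real services use far more data
# and proprietary algorithms.

POSITION_MULTIPLIER = {
    "top": 1.0,              # lead story on the homepage
    "middle": 0.4,
    "bottom": 0.1,
    "not_on_homepage": 0.02,
}

def estimate_views(daily_visitors: int, position: str, click_through_rate: float) -> int:
    """Estimate views of one article from site traffic, placement and CTR."""
    return int(daily_visitors * POSITION_MULTIPLIER[position] * click_through_rate)

# A mid-page placement on a hypothetical site with 2m daily visitors:
print(estimate_views(2_000_000, "middle", 0.05))  # 40000
```

The sketch makes the weakness obvious: small changes to any multiplier swing the estimate by orders of magnitude, which is exactly why such numbers rarely line up with observable signals like shares.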

The problem we are really confronting in the PR world is that we are trying to assign arbitrary numbers (where bigger = better) to disparate campaigns that are all out to achieve disparate objectives. I have been sent countless invites to seminars discussing how best to calculate a measure of success for the PR industry, and on each occasion I decline, on the basis that a one-size-fits-all approach is surely part of the problem.

At 72Point we take each project on its individual merits. If it’s big numbers you are looking for, then that’s what we can deliver – in December we landed on an average of 14 sites per story, with an average of over half a million eyes per story. If it’s generating a social buzz, we’re well set up for that too – our stories achieved an average of 5,546 social shares in the same month. And if it’s generating a bit of Google juice with follow links and keyword-optimised content, then let us know. We are one of the few companies in the industry to have eschewed fake metrics for real results, because that is what ultimately counts for our clients.