The Egg on North Face

North Face is not, by any means, the first company to try to alter Wikipedia entries for marketing purposes. They might, though, be the first to so publicly tout doing so and to have their success framed as overcoming the “obstacle” of the site’s own rules, which make clear such efforts are strictly verboten.

There’s quite enough blame to go around here. Parties to be held accountable include:

  1. Those who pitched the campaign, regardless of whether they were junior or senior staffers at the agency involved.
  2. Those at a more senior level who approved the idea as one suitable to be presented to the client.
  3. Those at the company who approved the campaign and gave the go-ahead for implementation.
  4. Those at Ad Age who presented the campaign as a cheeky “subversion” of the rules, as if we should reward those involved for willfully defiling a public space and treat them as innovators instead of the vandals they are.

We now find ourselves in a situation where professional ethics in the marketing industry seem to have been all but abandoned. It’s not just that times have changed and norms have shifted over the last 20 years: influencers don’t bother disclosing affiliate posts, celebrities don’t bother disclosing their paid posts, and a shocking number of marketers aren’t even aware there are guidelines around influencer marketing.

Fifteen years ago the people running online marketing programs were the same ones who had come up alongside the rise of the industry, many of whom were intimately involved in creating its professional guidelines and best practices. As that generation was phased out, they tried to pass on what they’d learned – sometimes the hard way – to those coming up behind them. At some point it became a game of telephone, though, to the point that some current marketing professionals don’t even try. And the industry press, afraid of losing access, isn’t pressing on these points.

All of that is in service to the belief that the best way to run a successful campaign or program is to be “edgy.” Throw norms out the window, because all they do is get in the way of making money. If one agency says they won’t execute a program because it violates some standard or guideline, all a company has to do is find one that suffers from no such moral burden, at which point the original agency has lost the business and will have to report the reason why to their holding company. Failure is not an option because there’s always someone willing to cross the line and leave before having to face any consequences.

You see this all the time, particularly with agencies and consultancies built around a single personal brand. They come up with the most outrageous idea and convince someone to execute it because they’re not in the business of keeping consistent, repeat clients. Instead they parachute into a project, rack up scores of billable hours developing a half-baked idea that’s then dropped in another party’s lap, and move on before the grenade explodes in someone’s hands.

The reputations of everyone involved remain intact because hey, at least there was some press coverage of the campaign, right? It may not be a Cannes Lion but it certainly broke through the media clutter and got some attention. So long as no one’s stock price was harmed, the only party held responsible might be the junior staffer tasked with copy-editing the campaign.

I’m painting with a broad brush here, and there are certainly ethical people operating at all levels of the marketing and advertising industry. Each time a story like this emerges, though, it’s hard not to think those people are becoming fewer and farther between while those free from any consideration beyond what will make the biggest impact proliferate.

Truth As Someone Else’s Problem

Over the last few weeks YouTube has become the new favorite target of media and technology industry commentators and watchdogs because of its tendency to prioritize conspiracy videos in its recommendations. More broadly, because YouTube, like other social networks, is built around reinforcing known behaviors, one step down this path is likely to produce a steady stream of prompts toward more and more just like it. If you watch one clip from “Parks & Recreation” it will immediately assume you want more and more. The same goes for watching one video about how the Parkland, FL school shooting was a Deep State false flag funded by George Soros and the Clinton Foundation.

In fact that’s exactly what happened. Even as the tragedy was still unfolding, YouTube came under fire because a video accusing one Parkland student of being a paid actor became the top video related to the incident. This is far from the first time this has happened, as the story points out, and in each case the company in question – whether YouTube, Facebook, Google or someone else – pledges to look into how it could have happened, but nothing ever seems to come of it. It’s a virtual lock that the next time something terrible happens we’ll be having the same conversation.

YouTube must have felt additional pressure, likely for the same reasons lawmakers have been publicly making statements about changing gun laws, because the company did announce one new feature: it would add links to Wikipedia entries to videos about disputed conspiracy theories, hoping people will click those links and find the truth instead of being easily swayed or having their hunches reinforced by the videos themselves. That move is essentially in line with what Facebook has done within the last year, working with sites like PolitiFact and Snopes to add fact-checking to stories readers had flagged as “disputed.”

Both, though, are examples of a company completely abdicating its responsibility to a community it purports to serve.

The announcement, it was quickly revealed, was made without any coordination with the Wikimedia Foundation. Nor did it come with any commitment to support that group or contribute resources.

No, YouTube – which is part of Alphabet, which also owns Google – wanted to outsource the heavy lifting to the volunteer editors and contributors who keep Wikipedia running.

For various reasons (none of them convincing) Alphabet does not break out YouTube’s advertising revenue in its reporting. But Alphabet as a whole recorded revenue of $25.9 billion in the fourth quarter of 2017.

Yet with all that money, it is unwilling or unable to either A) do anything more substantive about fighting the spread of false information itself or B) contribute anything to the organization it suddenly and unexpectedly assumes will do its content policing for it. The same can be said of Facebook and its handling of trending stories and news.

It’s great that YouTube is limiting the hours its human moderators are allowed to spend viewing and removing disturbing content, which often involves child pornography, abject violence and more. No one should have to deal with that, especially not for hours on end. But you can’t ask people to serve as the final check while also insisting that technology can solve all these problems. That’s obviously not the case.

Silicon Valley’s inability to learn from its own mistakes or admit any responsibility for what appears on its platforms is as inexplicable as it is predictable. The same problems keep coming up. Facebook just yesterday had to deal with a problem where searches were autocompleting with suggestions for child porn. If that sounds familiar it’s because YouTube had exactly the same issue just five months ago.

There are systemic problems baked into the technology being produced, technology we have all hailed as innovative and world-changing for the last decade. Unfortunately the companies producing it seem completely uninterested in solving those problems, instead counting on low-paid or volunteer fact-checkers to do the heavy lifting, sometimes without their knowledge or consent. All this while the sources of misinformation rush in to fill the vacuum left by legitimate professional news organizations, which have been decimated in part because those same tech companies are eating up all the ad spending.

4/5/18 Update: As it faces increased scrutiny over both data usage and the spread of disinformation on its site, Facebook has announced a slew of new initiatives to “give context” to news. One of those includes linking to the Wikipedia entries for news organizations, though, as with YouTube, there has been no additional comment on how the company might support the largely volunteer Wikipedia organization and its mission.

Chris Thilk is a freelance writer and content strategist who lives in the Chicago suburbs.

Covering Breaking News on Wikipedia

Brian Keegan looks at how breaking news is covered on Wikipedia as a large number of people try to create a real-time record of what’s happening followed by a core group of editors who clean things up and add additional details:

Wikipedia articles certainly do not break the news of the events themselves, but the first edits to these articles happen within two to three hours of the event itself unfolding. However, once created, these articles attract many editors and changes as well as grow extremely rapidly.

This is the most interesting chart, I think, which shows how these breaking stories are collaborated on in the first hour after the story is created.

[Chart: network-lt4h]

There’s been a lot of talk in recent months about how breaking news is covered by social media. Much of the attention recently has turned to Reddit, which takes a much different approach than Wikipedia does. But what’s more notable than any individual case is that people are not just turning to these community-powered outlets for the latest news but contributing themselves, adding links, insights and more to the general well of knowledge around an unfolding event.