Announced to great fanfare recently, the Google News Initiative was positioned by the company as a single umbrella for all its various publisher-centric programs and efforts. It would bring together things like AMP Project, Google News Lab and more while also containing a new program aimed at combating the rise of misinformation online, partnering with various groups and organizations to both weed out “fake news” and improve digital literacy. The three pillars of the Google News Initiative as outlined in the announcement blog post are:

  • Elevate and strengthen quality journalism
  • Evolve business models to drive sustainable growth
  • Empower news organizations through technological innovation

All worthwhile goals, to be sure. Quality journalism needs support, something Google is also working to provide, though lately the fate of newspapers depends less on the support they receive from readers than on how heavily they’ve been weighed down with private-equity debt by owners who aren’t journalists but investors. And while media organizations were slow to adopt new technology, many have sped up, including building out their own publishing platforms.

The guiding principle for the Google News Initiative is essentially stated in the opening line of the blog post: “People come to Google looking for information they can trust.”

Yes. That’s true. This new overarching program seems like a good step in that direction, and it appears to come in response to several realities and incidents. Google has long had a contentious relationship with media organizations, which have claimed for over a decade that Google News in particular stole readers who read only the headline and excerpt and never clicked through to the story, depriving publishers of ad revenue. Both Google and its corporate sibling YouTube have come under fire for a tendency to surface not only false information but outright conspiracy theories as “news” around crises.

Programs like this may be as close as Google comes to admitting not just that it needs to build technological systems that produce accurate results but that it has an editorial responsibility to contribute meaningfully to the public discourse. That’s something YouTube has failed to do as it looks to farm out fact-checking to Wikipedia without any apparent support for the site or its masses of unpaid volunteer contributors and editors.

The problem is that the “…looking for information they can trust” ideal is being undercut by other developments within the same company.

First: Google is working with select retailers to artificially surface results from those websites when people search for products, taking a cut of the revenue from those sales in exchange for higher rankings.

Second: Google seems to want to turn the internet into a two-tiered environment, with one tier given preferential positioning because its sites have implemented AMP and another tier for every other site.

Third: It’s testing a feature where celebrities and other notable personalities can leave short, Twitter-like posts directly within search results refuting what they believe to be misinformation.

All three – just the most recent examples – are laudable in and of themselves. It’s hard to argue with quick product availability, mobile-optimized websites and correcting rumors.

All three, though, violate what I believe to be first principles of not only Google but the web as a whole. Specifically, that the most prominent results should reflect authority bestowed upon them by the community. As soon as you open up *any* search results to being questioned for their veracity because of the vested self-interest of the gatekeeper, you allow *all* search results to be questioned for the same reason.

In other words, the fundamental trust people have put in Google search results is being undermined by Google itself because it believes it knows what’s best or has found a new way to generate revenue.

I understand why Google may have evaluated and embarked on all three programs. Instagram, Pinterest and other social networks are focusing more and more on shoppable ads, where people can buy a product with just a few clicks without ever leaving the network, and Google needs to compete. Plus, voice assistants and smart speakers, as well as site-specific search, are eating into general, web-wide search activity. New features from Facebook and Snapchat are geared directly at celebrities, allowing them to respond to and interact with fans.

The company needs to compete against those features, and it should be allowed to.

It can’t, though, simultaneously work to increase the quality of discourse on one hand while operating with its own interests clearly in mind on the other. What happens when a sponsored shopping result from Target for a product conflicts with breaking news about how that product may be responsible for making people ill? How about when a story about a musician being accused of two decades of sexual harassment and abuse is countered by a search-native post from that musician claiming the accusations are baseless when they’re clearly not?

At the moment we’re (rightly) focused on the problems plaguing Facebook, which now stands exposed not as a social network but as the world’s most sophisticated data-mining operation.

A similar level of scrutiny should be applied to Google for what it’s doing to shape the public consciousness. As it slowly begins to correct the oversights and missteps of the last decade and mend its relationship with news organizations, it can’t also introduce mechanisms to prop up revenue or market share that work against the idea of providing accurate access to the whole of human knowledge.

Google may want to save journalism, but it may find that journalism is not a money-making enterprise.

Chris Thilk is a freelance writer and content strategist who lives in the Chicago suburbs.