Over the last few weeks YouTube has become the new favorite target of media and technology industry commentators and watchdogs because of its tendency to prioritize conspiracy videos in its recommendations. More broadly, because YouTube, like other social networks, is built to reinforce known behaviors, one step down this path is likely to produce a steady stream of prompts toward more of the same. Watch one clip from “Parks & Recreation” and it immediately assumes you want more. The same goes for watching one video claiming the Parkland, FL school shooting was a Deep State false flag funded by George Soros and the Clinton Foundation.

In fact that’s exactly what happened. Even as the tragedy was still unfolding, YouTube came under fire because a video accusing one Parkland student of being a paid actor became the top video related to the incident. This is far from the first time this has happened, as the story points out, and in each case the company in question – whether YouTube, Facebook, Google or someone else – pledges to look into how this could have happened, but nothing ever seems to come of it. It’s a virtual lock that the next time something terrible happens we’ll be having the same conversation.

YouTube must have felt additional pressure – likely the same public pressure that has lawmakers making statements about changes to gun laws – because the company did announce one new feature: It would add links to Wikipedia entries to videos promoting disputed conspiracy theories, hoping people will click those links and find out the truth instead of being easily swayed, or having their hunches reinforced, by the videos themselves. That move is essentially in line with what Facebook has done within the last year, when it worked with sites like PolitiFact and Snopes to add fact-checking to stories readers had marked as “disputed.”

Both, though, are examples of a company completely abdicating its responsibility to a community it purports to serve.

The announcement was, it was quickly revealed, made without any coordination with the Wikimedia Foundation. Nor did it come with any sort of commitment to support that group or contribute resources.

No, YouTube – which is part of Alphabet, which also owns Google – wanted to outsource the heavy lifting to the volunteer editors and contributors who keep Wikipedia running.

For various reasons (none of them convincing) Alphabet does not break out YouTube’s advertising revenue in its reporting. But Alphabet as a whole reported revenue of $25.9 billion in the fourth quarter of 2017.

Yet with all that money, it is unwilling or unable to either A) do anything more substantive about fighting the spread of false information itself or B) contribute anything to the organization it suddenly and unexpectedly assumes will do its content policing for it. The same can be said of Facebook and its persistent problems with trending stories and news.

It’s great that YouTube is limiting the hours its human moderators are allowed to spend viewing and removing disturbing content, which often involves child pornography, abject violence and more. No one should have to deal with that, especially not for hours on end. But you can’t ask people to serve as the final check while also insisting that technology can solve all of these problems. It obviously can’t.

Silicon Valley’s inability to learn from its own mistakes or admit any responsibility for what appears on its platforms is as inexplicable as it is predictable. The same problems keep coming up. Facebook just yesterday had to deal with a problem in which searches were autocompleting with suggestions for child porn. If that sounds familiar, it’s because YouTube had exactly the same issue just five months ago.

There are systemic problems baked into the technology being produced – technology we have all hailed as innovative and world-changing for the last decade. Unfortunately, the companies producing that tech seem completely uninterested in solving those problems, instead counting on low-paid or volunteer fact-checkers to do the heavy lifting, sometimes without their knowledge or consent. All this while sources of misinformation rush in to fill the vacuum left by legitimate professional news organizations, which have been decimated in part because those same tech companies are eating up all the ad spending.

4/5/18 Update: As it faces increased scrutiny over both data usage and the spread of disinformation on its site, Facebook has announced a slew of new initiatives to “give context” to news. One of those involves linking to the Wikipedia entries for news organizations, though, as with YouTube, there has been no comment on how the company might support the largely volunteer organization behind Wikipedia and its mission.

Chris Thilk is a freelance writer and content strategist who lives in the Chicago suburbs.