I’ve certainly had experiences similar to what’s described in this much-shared story from a few months ago about the way the algorithms that push suggestions and notifications at us all day long are never exactly right, but never exactly wrong either. Spotify, for instance, continues to insist I’m going to enjoy Hall & Oates based on my other listening habits, but while I’ll certainly sing along when a song comes on the radio, I have no desire to explore the rest of their catalog.

The recommendations we see on Spotify, Pandora, Facebook, Twitter and every other modern media platform are based on tracking and categorizing. Each one, either on its own or in conjunction with some subset of other players, collects all our activity and spits out what its algorithm believes is a solid prompt to put in front of us to encourage even more activity. They want us to keep engaging, locking us ever tighter into their systems, the better to sell ads that are themselves delivered based on our stated interests and previous activity.
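None of these platforms publish their actual pipelines, but the basic loop is simple enough to caricature. Here’s a minimal sketch in Python, with hypothetical event and category names standing in for whatever the real systems track:

```python
from collections import Counter

# Hypothetical mapping from tracked events to interest categories.
# Real platforms have far richer taxonomies; these names are made up.
CATEGORY_OF = {
    "played_hall_and_oates_track": "yacht rock",
    "liked_coffee_brand_x_page": "coffee",
    "clicked_running_shoes_ad": "fitness",
}

def build_profile(activity_log):
    """Fold every tracked event into a weighted interest profile."""
    profile = Counter()
    for event in activity_log:
        category = CATEGORY_OF.get(event)
        if category:
            profile[category] += 1
    return profile

def recommend(profile, inventory):
    """Serve whatever the profile weights most heavily -- i.e., more of the same."""
    return sorted(
        (item for item in inventory if item["category"] in profile),
        key=lambda item: profile[item["category"]],
        reverse=True,
    )

activity = ["played_hall_and_oates_track", "liked_coffee_brand_x_page",
            "played_hall_and_oates_track"]
inventory = [
    {"name": "Coffee Brand X promo", "category": "coffee"},
    {"name": "More Hall & Oates", "category": "yacht rock"},
]
print(recommend(build_profile(activity), inventory))
# Hall & Oates ranks first because it was logged twice; the promo follows.
```

The point isn’t the implementation, it’s that the only output a loop like this can produce is “more of whatever you already did.”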

What’s disturbing, though not necessarily surprising, is that nearly three-quarters of American internet users, according to Pew Research, don’t know that Facebook in particular maintains a list of their interests and activities in order to better target them with “relevant” ads. It’s not just your actions *on* Facebook that are captured but what you do on a startling number of other websites as well. The reason it isn’t shocking that people don’t understand the depth of the metrics being tracked is that most people don’t even know Facebook’s News Feed is filtered rather than a firehose of updates.

One passage from Pew’s recent study jumped out at me:

> A majority of users (59%) say these categories reflect their real-life interests, while 27% say they are not very or not at all accurate in describing them.

This gets to the heart of the problem. While overly invasive tracking is certainly an issue that needs to be addressed on many levels, the whole system is based on a faulty premise: that what one does online is wholly representative of who they are and that nothing ever changes.

Algorithms and AI don’t do nuance, so those systems aren’t going to know that while you might prefer Coffee Brand X, you also enjoy mixing it up with Coffee Brand Y once a week just because. They just know you’ve liked Coffee Brand X’s profile or page and so will serve you ads and recommendations based on that as if it’s the only possible opinion you could have. Nor do they understand that while you did like that page eight years ago, you’ve since learned about the horrible way the company treats its workers and no longer drink their coffee.
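To make that premise concrete, here’s a toy continuation of the sketch above (again, every name is made up): a tracked “like” is recorded once and then treated as true forever, because nothing in the data model expires, decays, or asks whether you’ve changed your mind.

```python
from datetime import datetime, timedelta

class InterestProfile:
    """Toy model of a tracked user: every signal is stored, and none ever decays."""

    def __init__(self):
        self.signals = {}  # category -> timestamp of the first recorded signal

    def record_like(self, category, when=None):
        self.signals.setdefault(category, when or datetime.now())

    def ad_targets(self):
        # No freshness check and no "I changed my mind" signal:
        # a like from eight years ago counts exactly like one from today.
        return list(self.signals)

profile = InterestProfile()
profile.record_like("coffee_brand_x", when=datetime.now() - timedelta(days=8 * 365))
print(profile.ad_targets())  # ['coffee_brand_x'] -- still an ad target, eight years on

# A "reset button" would be a one-liner...
# profile.signals.clear()
# ...which is exactly why, as argued below, the platforms have little incentive to offer one.
```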

The assumption that everything is locked in stone is part of the reason these platforms don’t offer any sort of reset button, an idea proposed a while ago by Kurt Wagner at Recode. Everyone’s interest and friend lists get messy over time, resulting in plenty of instances where you ask yourself “Wait, who is this person?” when you see an update from them in your feed or come across them on your list of connections. A quick and easy way to wipe the slate clean and start fresh would make management so much easier, but it would also wipe out all that data being accumulated on you, which would reduce how much the platforms can charge for sponsored posts or placements. It’s always a good rule of thumb to remember that if an idea sounds like it would be good for users, it’s likely to be bad for the company and therefore won’t be implemented.

One of the primary reasons given by me and others for why RSS never caught on with the masses is that education was difficult. Most people never quite understood how it worked or how they could use it, so it was shunted to the side as a niche technology, especially once social media began its ascendance.

Social media, though, has thrived *because* people have never been fully educated on how it works. Every “#DeleteFacebook” uprising has been spurred by reports of how the platform has invaded and abused the privacy of those who use it, but none has ever amounted to much, because that all sounds really complicated and the societal costs of removing oneself from the network are too high.

Help isn’t coming from on high, either. The repeated appearances by Mark Zuckerberg and other tech and media executives before lawmakers have shown time and again that those lawmakers don’t understand how these systems work, or even how to ask the right questions about them. Nor do many in the media, which leads to situations where both groups are left aghast when someone like Alexandria Ocasio-Cortez correctly points out that algorithms and AI are riddled with bias.

This isn’t a situation that’s going to get better until there are more people like Ocasio-Cortez who are both informed and in a position to effect meaningful change. All the campaigns to get people to ditch Facebook or stories about Spotify leaking personal data won’t amount to much, because the public doesn’t know what any of it means for them. It’s the same dynamic we see with the climate change debate. Only when these companies are held accountable in real, meaningful ways will we see their actions improve.