You can’t blame Scarlett Johansson for feeling the fight against deepfake porn featuring her face isn’t worth the effort. She has faith most people know it isn’t actually her in those videos and knows that attempts at legal action would likely be as effective as shooting arrows at a cloud.

If you’re not familiar with what deepfakes are, you owe it to yourself to become educated. Put simply, machine learning is used to alter a video in a way that makes it appear completely real, often featuring someone saying something they’ve never said or appearing somewhere they’ve never been. Think about Robert Zemeckis having Lyndon B. Johnson interact with Forrest Gump, but a thousand times more convincing and easily alterable.

Such videos have been flagged as a threat to the integrity of the U.S. election process as well as to governments around the world. It's one thing, after all, to share an altered image purporting to show Hillary Clinton holding up a sign saying "I DID BNGAZI 4 DA LULZ" and another to share a video in which she appears to confess to personally killing American ambassadors.

The threat presented by deepfakes to the marketing industry is slightly less important but no less real.

Imagine you're a makeup company that has recently signed a new celebrity spokesperson for a campaign that's about to launch. Everything is firing on all cylinders and the campaign's debut is a week away. Suddenly you get an alert: a video climbing the YouTube charts shows that celebrity endorsing your competitor at a recent publicity event. Or worse, it shows them encouraging people to attack others outside churches and other institutions.

Make the hypothetical even more personal for the company and imagine a video of the CEO openly espousing some sort of terrible position on a social issue, or “admitting” that the product being sold is inferior or harmful.

What comes next? You know this isn't real, but 3,000,000 views inside of 24 hours is not something you can just ignore. A response has to be made, though you know efforts to combat misinformation often fall flat, achieving only a fraction of the reach and impact of the original, largely because such misinformation is propagated by bots, trolls and other bad-faith actors who aim solely to sow discord.

In many ways this is a crisis comms scenario, where you need to be prepared for the worst possible eventuality. But it goes even deeper, because this isn't just about something that's gone wrong, a campaign that's been received poorly, or an executive caught in an open-mic gaffe.

I've seen zero conversations about this and have been unable to find anything about it through search, so my sense is that marketers have not yet caught on to the dangers deepfakes pose to the brands they're responsible for. The belief that such videos will be confined to the political realm is naive, though; they will inevitably spill over, just as every other harmful technology has.

There are tactics marketing professionals will have to engage in to minimize the damage these videos will do, including stepping up monitoring and developing specific messaging to counter what’s published. More than anything, they will have to convince the audience of consumers and press that anything that can’t be verified as authentic isn’t to be trusted.

That's going to take some doing, and even then it won't be enough to undo all the damage inflicted on corporate reputations. Marketers need to be prepared, or they're doing everyone a disservice.