
Deepfakes, the “truth” and how they should affect your media plan.




The sheer volume and ubiquity of misinformation make it feel like a fairly recent phenomenon, but the ‘fake news’ game has actually been around since the beginning of mankind.

 

Human beings figured out very quickly that spreading lies was a very effective way to bring down an opponent and gain power and influence for themselves. In the earliest of times, it was a grind worthy of Monty Python. The early adopters of fake news didn’t have the help of big tech. They had to put in the work, trek from village to village, climb up on a big box in the middle of the town square and shout their material live and in person to whoever would listen.

 

With each year and every technical innovation, from the printing press to radio, TV to social media, the creation and distribution of misinformation got exponentially easier and faster.

 

And, as we are seeing, the power provided by AI is taking this to a completely new level.

 

In what already feels like the longest and most soul-sucking political campaign in the history of US political campaigns (and that’s saying something), the AI deepfakes started early. Democrats in the New Hampshire primary were told by an AI-generated robocall mimicking President Biden’s voice to “save their vote for the November election.” It’s just the beginning.

 

This sort of scam, something that is deliberately created and intentionally disseminated, is known as ‘direct’ misinformation. But it is accidental or ‘indirect’ misinformation that is the greater risk to your brand.

 

Why?

 

Indirect misinformation is, in simple terms, the unintentional spreading of biased or misleading information that suggests false conclusions without actually stating them, fueled by the algorithms big tech uses to keep us glued to their platforms for as long as possible.

 

The more sensational the post, the more their algorithms will amplify it to grow engagement. Truth is not part of the equation.


Algorithms value engagement over accuracy. Period.

 

Let’s say someone with an agenda posts something juicy, but vague and unproven, about the politician Jane Doe. The post is provocative, so it gets a lot of likes, shares, and comments. The algorithm notices this and sees an opportunity to get more engagement, so it elevates the post in more users' feeds, spreading the misinformation more widely. The algorithm has no intent to misinform, just to elevate based on user interaction. While not intentional, the platform has nevertheless promoted false information.
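
To make the mechanics concrete, here is a minimal sketch of an engagement-weighted feed ranker. The posts, weights, and scoring formula are invented for illustration; real platform algorithms are vastly more complex, but they share the trait that matters here: nothing in the score measures whether a post is true.

```python
# Toy illustration of engagement-based ranking. The posts, weights, and
# scoring formula are made up for this sketch. Note that the
# "verified_accurate" flag exists only to make the point: it never
# influences the ranking.

posts = [
    {"id": "jane_doe_rumor", "likes": 4200, "shares": 1800,
     "comments": 950, "verified_accurate": False},
    {"id": "city_budget_report", "likes": 310, "shares": 40,
     "comments": 22, "verified_accurate": True},
]

def engagement_score(post):
    # Weighted sum of interactions; shares and comments weigh more
    # because they generate further impressions.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

# Rank the feed purely by engagement. The unproven rumor rises to the
# top even though accuracy never enters the calculation.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], engagement_score(post))
```
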

 

(Big Tech could also use algorithms to measure and value truth on their platforms, through fact-checking systems, AI analysis of sources, or adjustments that prioritize verified content. But they don’t. That’s a whole other discussion.)

 

Meanwhile, you are marketing your business, and digital is rightly going to make up part of your overall media plan. You have no desire to enmesh your brand in the divisive world of politics.

 

Through no fault of your own, a current or potential customer finds your ad proudly displayed next to, or even within, the salacious and misleading post about Jane Doe, leaving them with the very reasonable impression that you are promoting or even endorsing the content.  At best, it could lead to a negative impact on your brand image. At worst, a crisis communications nightmare.  

 

Not optimal to say the least.

 

Further exacerbating the lack of control is the dominance of programmatic advertising (roughly 74% of all digital ad spend in North America in 2023), which uses the same kind of algorithms to purchase your digital ads.

 

While your campaign will be targeted based on criteria you determine, the process itself is automated and ads are auctioned off in milliseconds, removing the chance for any human oversight. Since the algorithms you rely on to maximize your reach are focused on engagement, they will be drawn to the same sensational or misleading content.
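
As a rough illustration, here is a minimal sketch of a bid decision driven purely by predicted engagement. The request fields, numbers, and bidding formula are hypothetical, not any real exchange or DSP API, but they show the gap: nothing in this decision asks what kind of page the ad will sit on.

```python
# Simplified, hypothetical sketch of a real-time bid decision.
# Real RTB systems follow the OpenRTB protocol and run many more checks;
# the point here is that if the only signal is predicted engagement,
# nothing asks whether the page hosting the ad is credible.

bid_request = {
    "site_domain": "totally-real-news.example",
    "page_keywords": ["jane doe", "scandal", "shocking"],  # never examined below
    "predicted_ctr": 0.031,  # high, because the page is sensational
}

def decide_bid(request, base_cpm=2.00):
    # Bid more where predicted click-through is higher.
    # Note: no allow list, no block list, no content-quality check.
    return round(base_cpm * (1 + request["predicted_ctr"] * 100), 2)

print(decide_bid(bid_request))  # 8.2, well above the 2.00 base CPM
```
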

 

And where there is money to be made, there will be sharks. Shortly after programmatic advertising came the made-for-advertising (MFA) website, created for the sole purpose of poaching your ad dollars.

 

MFAs lure people to their sites with tactics such as clickbait headlines, misleading information, or the promise of exclusive content, then inundate their visitors with random ad content.


Forbes wrote a great article about MFAs in May of 2023 titled The Rise Of ‘Made For Advertising’ Sites: How Responsible Brands Can Take Action. Ironically, a year later Forbes itself was the subject of a Washington Post report alleging that, for years, paid ads intended for Forbes.com had been running on a copycat site loaded with 200-plus ads and promoted through clickbait-style paid ads. It has also been reported that advertisements from notable brands such as McDonald’s, Disney, Microsoft, JPMorgan Chase, American Express, the New York Times, and the Wall Street Journal were displayed on this alternative Forbes website.

 

Lou Paskalis, Chief Strategy Officer of news media accountability company Ad Fontes Media, wrote, “Clickbait websites could be siphoning $17 billion a year… and brands don’t even know they’re paying for ads there.”

 

Programmatic advertising has its benefits (efficiency, targeting, real-time measurement), but you do run the very real (and growing) risk of having your brand displayed in settings like these.

 

Here are a few ways to minimize your risk.

 

1. Choose safe sites. Buy ad space from reputable publishers and platforms known for quality content. Much of the prized 25-54 demo has been shown to combine traditional and digital media, often turning to the digital platforms of established print and broadcast news outlets.

 

2. Make lists. Create block lists of sites known for misinformation and preferred lists of safe, credible sites, and load these into your campaign settings (see the sketch after this list).

 

3. Target smartly. Use contextual and keyword targeting to place ads based on the content of the webpage rather than user behaviour (also shown in the sketch after this list).

 

4. Monitor your campaign! Regularly look at where your ads are going and adjust as needed.

 

5. Use safety tools. There are companies (Comscore, DoubleVerify, etc.) that specialize in the safety and suitability of ad placements.
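
For hands-on campaign managers, here is a minimal sketch of how points 2 and 3 can work together before a placement is approved. The domains, keywords, and helper function are hypothetical examples, not a real ad platform integration; most DSPs let you upload the equivalent lists directly in your campaign settings.

```python
# Minimal sketch of pre-placement checks combining a block list / preferred
# (allow) list with simple contextual keyword screening. All domains,
# keywords, and the helper function are hypothetical.

BLOCK_LIST = {"totally-real-news.example", "shockingtruth.example"}
ALLOW_LIST = {"established-newspaper.example", "national-broadcaster.example"}
BLOCKED_KEYWORDS = {"scandal", "hoax", "exposed", "you won't believe"}

def placement_allowed(domain: str, page_text: str) -> bool:
    """Return True only if the domain and page content pass both checks."""
    if domain in BLOCK_LIST:
        return False
    if ALLOW_LIST and domain not in ALLOW_LIST:
        return False  # when an allow list is set, only listed sites qualify
    text = page_text.lower()
    return not any(keyword in text for keyword in BLOCKED_KEYWORDS)

print(placement_allowed("established-newspaper.example", "City budget passes"))    # True
print(placement_allowed("totally-real-news.example", "Jane Doe scandal exposed"))  # False
```
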

 

 

The bottom line:

 

There is nothing new about misinformation or the kind of people who intentionally use it to their benefit. And while every technological advance has made it easier to create and distribute, AI is taking it to another level.


Digital advertising is an incredible tool, offering exceptional targeting capability, scale, real-time analytics and precision. But there are very real brand safety risks that you need to be aware of and minimize. The good news: it can be done.

 

 
