I’ve been struggling to tell whether the ads appearing in my TikTok feeds have been made with generative AI tools. As someone who spends a great deal of time scrutinizing images and videos for the usual “tells” that something was synthetically generated, some of the promotions I’ve seen have definitely sparked suspicion. For several weeks, however, I didn’t see a single example carrying the AI disclosure required by TikTok’s advertising policies, so I had no way of knowing for sure.
What irks me is that someone knows for sure if the content is AI-generated. They’re just not telling the rest of us. And if companies that claim to support AI-labeling initiatives actually want them to succeed, they should probably do something about that.
Take Samsung, for example. After slopping AI-generated videos across its social media channels, the company started serving me TikTok ads teasing the Galaxy S26 Ultra’s privacy display feature. Videos from what appears to be the same promotional campaign had been published to YouTube with disclosures in their collapsed descriptions that AI tools had been used to make them. By comparison, the TikTok ads gave no indication of whether AI had been used. Regular videos on Samsung’s TikTok accounts — those not actively promoted as ads — also lack AI disclosures, despite those same videos being labeled as AI-generated on YouTube.
It’s important to note that both Samsung and TikTok are members of the Content Authenticity Initiative, a group that aims to make content authenticity and transparency “scalable and accessible” by promoting the industry-wide adoption of C2PA. That means TikTok and Samsung supposedly share similar ideals regarding the labeling of AI content. If Samsung knowingly used AI to make its videos, it should have told TikTok when the ads were submitted. If TikTok was informed, it should have made sure its users were aware, per the platform’s own advertising policies.
Advertisers on TikTok are only permitted to use content “significantly” edited or generated by AI if they make that known. That can be achieved by applying TikTok’s own AI label, or by adding a disclaimer, caption, watermark, or sticker of the advertiser’s choosing, according to the video platform’s business advertising policy:
“When we say ‘significantly modified by AI,’ we mean content that has been changed by AI beyond minor tweaks or enhancements. This includes using real images or videos as source material but altering them substantially with AI, such as:
• Content that contains images, video, or audio that are completely AI-generated.
• Showing the primary subject doing something they didn’t actually do, like dancing.
• Making the primary subject say something they didn’t actually say, using AI voice-cloning.”
Samsung did not respond to my requests for comment. TikTok pointed me to its AI labeling requirements for advertisers and its C2PA partnership, but declined to provide an on-record statement on why Samsung’s AI-generated ads received a pass. I’m still in the dark regarding what step of this transparency process failed.
I spotted a new development earlier this week — TikTok ads promoted by UK-based used car retailer Cazoo that I had previously encountered without a disclosure now have a message that reads “advertiser labeled as AI-generated” at the bottom, beside the “Ad” identifier. I already suspected the ads in question were likely AI-generated because they all contained bizarre visual distortions that had no rational editing explanation, such as a dentist’s drill morphing into different shapes and jumping between hands.

I can’t tell if Samsung’s ads on TikTok have undergone a similar update because it’s been several days since any were promoted to my feeds. AI transparency across Samsung’s TikTok accounts is generally a mess, though — some videos have TikTok’s own AI label applied, others have a disclosure manually included in the video fine print, and several AI-generated examples carry no disclosure at all.
There is currently no trusted technological solution for reliably identifying AI-generated content, or even human-made content, at scale. I’ve spent plenty of time banging on about the flaws of authentication standards like C2PA Content Credentials, SynthID, and other provenance-based systems that try to inform users of how a piece of content was made — they need everyone to be on board to work effectively, and that simply isn’t happening. That’s a problem when people are struggling to know what’s real and what isn’t in the current geopolitical landscape.
But that applies to online content generally, whereas advertising is a regulated industry that’s supposed to play by a different set of rules.
Many of these rules were put into place to protect consumers from being misled or outright lied to by advertisers, such as laws that prevent cosmetics companies from slapping false lashes onto models to sell their mascaras. TikTok beauty influencers like Mikayla Nogueira have found out the hard way that these rules apply to them when promoting products, and that their audiences tend to react badly to dishonest shilling tactics.
That isn’t to say that generated videos are always misleading, but concerns around advertising transparency have prompted the EU, China, and South Korea to introduce labeling requirements for AI in promotional materials. Even companies that haven’t pledged to support AI transparency initiatives could risk future fines if they don’t get their act together.
If large online platforms like TikTok and advertisers like Samsung can’t be honest with each other about AI usage in such a regulated environment, well, then anyone can advertise whatever nonsense they want. I’m happy that at least some ad-specific AI labels are starting to appear on TikTok after I directly flagged the ads to the companies involved. But this is a simple two-way system that should already be robustly implemented and enforced without needing people like me to scrutinize every ad in their feeds.