After the Online Safety Act’s arduous multiyear passage through the UK’s lawmaking process, regulator Ofcom has published its first guidelines for how tech firms can comply with the mammoth legislation. Its proposal — part of a multiphase publication process — outlines how social media platforms, search engines, online and mobile games, and pornography sites should deal with illegal content like child sexual abuse material (CSAM), terrorism content, and fraud. 

Today’s guidelines are being released as proposals so Ofcom can gather feedback before the UK Parliament approves them toward the end of next year. Even then, the specifics will be voluntary. Tech firms can be sure they’re complying with the law by following the guidelines to the letter, but they can take their own approach so long as they demonstrate compliance with the act’s overarching rules (and, presumably, are prepared to argue their case with Ofcom).

“What this does for the first time is to put a duty of care on tech firms to have a responsibility for the safety of their users,” Ofcom’s online safety lead, Gill Whitehead, tells The Verge in an interview. “When they become aware that there is illegal content on their platform, they have got to get it down, and they also need to conduct risk assessments to understand the specific risks that those services might carry.”

The aim is to require that sites proactively stop the spread of illegal content rather than simply play whack-a-mole after the fact. The rules are meant to encourage a switch from a reactive to a preventative approach, says lawyer Claire Wiseman, who specializes in tech, media, telecoms, and data.

Ofcom estimates that around 100,000 services may fall under the wide-ranging rules, though only the largest and highest-risk platforms will have to abide by the strictest requirements. Ofcom recommends these platforms implement policies like not allowing strangers to send direct messages to children, using hash matching to detect and remove CSAM, maintaining content and search moderation teams, and offering ways for users to report harmful content. 
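For readers unfamiliar with the technique, hash matching works by fingerprinting uploaded files and comparing those fingerprints against databases of known abusive material maintained by child protection organizations. The sketch below is purely illustrative and uses a hypothetical hash list; real deployments rely on perceptual hashing systems such as Microsoft’s PhotoDNA, which can match images that have been resized or re-encoded, rather than exact cryptographic hashes.

```python
import hashlib

# Illustrative only: production systems use perceptual hashes (e.g., PhotoDNA)
# obtained from child protection clearinghouses, not plain SHA-256, and the
# hash lists themselves are tightly controlled. KNOWN_ILLEGAL_HASHES is a
# hypothetical stand-in for such a list.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def matches_known_hash(file_bytes: bytes) -> bool:
    """Check an upload's fingerprint against the list of known-bad hashes."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_ILLEGAL_HASHES

def handle_upload(file_bytes: bytes) -> str:
    # Platforms typically block the upload and escalate it to their trust and
    # safety team (and, where required, report it) rather than silently drop it.
    if matches_known_hash(file_bytes):
        return "blocked_and_escalated"
    return "accepted"
```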

Large tech platforms already follow many of these practices, but Ofcom hopes to see them implemented more consistently. “We think they represent best practice of what’s out there, but it’s not necessarily applied across the board,” Whitehead says. “Some firms are applying it sporadically but not necessarily systematically, and so we think there is a great benefit for a more wholesale, widespread adoption.” 

There’s also one big outlier: the platform known as X (formerly Twitter). The UK’s efforts with the legislation long predate Elon Musk’s acquisition of Twitter, but the act was passed as he fired large swaths of the company’s trust and safety teams and presided over a loosening of moderation standards, which could put X at odds with regulators. Ofcom’s guidelines, for example, specify that users should be able to easily block other users — but Musk has publicly stated his intention to remove X’s block feature. He’s clashed with the EU over similar rules and reportedly even considered pulling out of the European market to avoid them. Whitehead declined to comment when I asked whether X had been cooperative in talks with Ofcom but said the regulator had been “broadly encouraged” by the response from tech firms generally.

Ofcom’s regulations also cover how sites should deal with other illegal harms like content that encourages or assists suicide or serious self-harm, harassment, revenge porn and other sexual exploitation, and the supply of drugs and firearms. Search services should provide “crisis prevention information” when users enter suicide-related queries, for example, and when companies update their recommendation algorithms, they should conduct risk assessments to check that they’re not going to amplify illegal content. If users suspect that a site isn’t complying with the rules, Whitehead says there’ll be a route to complain directly to Ofcom. If a firm is found to be in breach, Ofcom can levy fines of up to £18 million (around $22 million) or 10 percent of worldwide turnover — whichever is higher. Offending sites can even be blocked in the UK.

Today’s consultation covers some of the Online Safety Act’s least contentious territory, like reducing the spread of content that was already illegal in the UK. As Ofcom releases future updates, it will have to take on touchier subjects, like content that’s legal but harmful for children, underage access to pornography, and protections for women and girls. Perhaps most controversially, it will need to interpret a section that critics have claimed could fundamentally undermine end-to-end encryption in messaging apps. 

The section in question allows Ofcom to require online platforms to use so-called “accredited technology” to detect CSAM. But WhatsApp, other encrypted messaging services, and digital rights groups say this scanning would require breaking apps’ encryption systems and invading user privacy. Whitehead says that Ofcom plans to consult on this next year, leaving its full impact on encrypted messaging uncertain.

There’s another technology not emphasized in today’s consultation: artificial intelligence. But that doesn’t mean AI-generated content won’t fall under the rules. The Online Safety Act attempts to address online harms in a “technology neutral” way, Whitehead says, regardless of how they’ve been created. So AI-generated CSAM would be in scope by virtue of it being CSAM, and a deepfake used to conduct fraud would be in scope by virtue of the fraud. “We’re not regulating the technology, we’re regulating the context,” Whitehead says.

While Ofcom says it’s trying to take a collaborative, proportionate approach to the Online Safety Act, its rules could still prove onerous for sites that aren’t tech juggernauts. The Wikimedia Foundation, the nonprofit behind Wikipedia, tells The Verge that it’s proving increasingly challenging to comply with different regulatory regimes across the world, even though it supports the idea of regulation in general. “We are already struggling with our capacity to comply with the [EU’s] Digital Services Act,” the Wikimedia Foundation’s VP for global advocacy, Rebecca MacKinnon, says, pointing out that the nonprofit has just a handful of lawyers dedicated to the EU regulations compared to the legions that companies like Meta and Google can deploy.

“We agree as a platform that we have responsibilities,” MacKinnon says, but “when you’re a nonprofit and every hour of work is zero sum, that’s problematic.” 

Ofcom’s Whitehead admits that the Online Safety Act and Digital Services Act are more “regulatory cousins” than “identical twins,” which means complying with both takes extra work. She says Ofcom is trying to make operating across different countries easier, pointing toward the regulator’s work setting up a global online safety regulator network.

Passing the Online Safety Act during a turbulent era in British politics was already difficult. But as Ofcom begins filling in its details, the real challenges may be only beginning.
