It has been almost universally accepted that some content, such as sexually explicit photos and videos depicting extreme violence, does not belong on social media. However, the line separating what should and should not be permitted on these platforms becomes much more blurred when attention turns to speech.
This question has grown ever more controversial in recent years, and platforms’ answers dangerously inconsistent, as seen in 2021 when Twitter banned former and now President-Elect Donald Trump from its platform for glorifying violence yet allowed Ayatollah Khamenei to tweet for the ‘removal of the Zionist regime’ using ‘firm, armed resistance’.
The UK’s response to these unpredictable practices is the Online Safety Act (OSA). Passed in 2023, it broadens the scope of what is deemed inappropriate online content, lowering the threshold for removal to content that is ‘harmful, but legal’. An expansion beyond all previous legislation, this new threshold includes “images, words and videos that are legal to create and view but have a high risk of causing physical or psychological damages or injury.”
The goal was to streamline platforms’ efforts to mitigate hate speech, but the new standard has already proven difficult to realise. The same exasperating difficulties in moderating online speech recur, and the OSA continues to give platforms too much freedom in regulating users’ speech, ultimately failing to protect them or to effectively combat antisemitism.
There’s no question that rising levels of online antisemitism made new legislation a necessity. Alongside other forms of hate speech, online antisemitism has grown at a concerning rate. From February 2020 to 2021 there was a 41% increase in antisemitic posts, a 912% increase in antisemitic comments, and a 1,375% increase in antisemitic usernames.
Despite prohibiting hate speech in its policies, Facebook failed to act on 89% of the antisemitic posts appearing on its site in 2021. This unfortunate pattern repeats itself across other platforms, all of which make similar promises of user protection that prove catastrophically weak in practice. In the same year, the Center for Countering Digital Hate (CCDH) found 714 posts on Facebook, Instagram, Twitter, TikTok, and YouTube depicting antisemitic conspiracies, extremism, or abuse. Rather than acting in line with their own company policies, the platforms took no action on 84% of these posts.
This lack of accountability embodies the dangers of a self-regulating regime. Moderating speech through internal community guidelines is weak and ineffective, essentially allowing companies to conveniently escape liability for what their users post by relying on an agreement that would take the average user 250 hours to read.
This power imbalance becomes even more dangerous when platforms employ it as a shield to excuse antisemitic speech. The Online Safety Act aims to mitigate this asymmetry by improving transparency and equipping users with reporting tools to notify platforms of harmful content. These measures are not the issue; it is the Act’s enforcement mechanisms that render the OSA ineffective.
The OSA’s enforcement mechanisms must be strengthened to reflect the importance it places on a platform’s duty of care to its users. The OSA dictates that platforms must “identify, mitigate and manage the risks of harm”. However, the details of these required processes are unclear, so the problem of inconsistent internal regulation across platforms remains unresolved.
Placing a duty of care on platforms to take reasonable steps to remove harmful content and limit user access to it is an admirable goal, but with minimal guidance on how to achieve it, the duty fails to be as effective as intended. Without clear requirements on how platforms need to improve, they retreat to their existing mechanisms, which have already proven insufficient to protect users.
The Act’s broad definitions are compounded by weak enforcement. It has become clear that platforms do not fear the OSA’s enforcer, Ofcom. This comes as no surprise when assessing the extent of Ofcom’s power, which amounts more to influence than regulation. The OSA does not give Ofcom the power to order platforms to remove content or limit its availability. The regulator only becomes relevant once a platform knew prohibited content was on its site and failed to act appropriately. This strategy relies solely on users to report harmful content, a sharp departure from the duty of care the Act places on platforms to take on this responsibility as active moderators. User protection should not be left in the hands of the users themselves.
Ofcom’s weakness was made evident by the rapid increase in antisemitism after the October 7 attacks. Studies found that in the four days after the attacks, there was a 51-fold increase in antisemitic comments on YouTube.
Lack of proper enforcement means the OSA’s important aims are not being realised. Even with the Act in force, platforms are still riddled with antisemitic content. In November 2023, the CCDH’s study of online antisemitism found 140 antisemitic posts on X, formerly Twitter, with 85% of them still up on the site a week later. Despite its classification as “priority content that is harmful to adults”, antisemitism has remained prevalent on social media platforms thanks to the gap between the Act’s ambitious aims and its approach to actualising them.
Some may argue that the threat of larger fines is deterrent enough for platforms to implement the Online Safety Act’s requirements. Unfortunately, fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, are not enough to guarantee these companies’ cooperation. Fines have failed to provide the necessary incentive in the past, as platforms consider their freedom in monitoring content worth a steep price. The strongest action a European country has taken against a platform came in 2019, when Germany fined Facebook €2 million for failing to meet the country’s requirements for dealing with illegal content. Compared with the company’s first-quarter revenue that year of over €14 billion, the fine was clearly not high enough to motivate Facebook to change its policies, prompting the UK to pursue its own measures. With no additional controls, Ofcom stands little chance of holding these platforms accountable. This, combined with a reactive approach that leans on ineffective fines, means platforms are not incentivised to be proactive in combatting antisemitism on their sites.
If the Online Safety Act is insufficient, how should the government regulate online antisemitism?
The first step to combatting antisemitism is to allocate more direct control to Ofcom. There seems to be little point in creating an enforcement agency that is demonstrably impotent.
Though Ofcom can hold companies and their senior managers criminally liable for failing to comply with enforcement notices, the body has virtually no influence over the content these platforms host. If platforms could be compelled to take down content as well as pay a hefty fine, they might be more proactive rather than relying entirely on users’ reports.
Clearer definitions would also help achieve the Act’s goals. The vagueness of its requirements, for example how platforms are expected to take “reasonable steps” to remove harmful content, does not end the era of self-regulation. Granted, platforms have different designs and an entirely homogeneous approach might be impractical, but they must nonetheless be held to more specific requirements.
Given these platforms’ track records of failing to act against online antisemitism, a stricter approach is justified.
Image: Flickr/Ted Eytan
No image changes made.