The Committee on Standards in Public Life made the recommendation in a new report (88-page / 900KB PDF) it has published following a review into the intimidation of parliamentary candidates. UK prime minister Theresa May commissioned the review by the independent body in July 2017.
In its report, the Committee said that Brexit would offer the UK the chance to change laws on liability for illegal online content. Currently, EU law under the E-Commerce Directive restricts the extent to which intermediaries that host illegal content can be held liable for that material.
"Social media companies are not held legally liable for any illegal content, as they are likely to fall within the ‘hosting’ exemption, where the provider’s relationship to that content as a host is considered merely ‘technical, automatic or passive’," the Committee said. "The hosting exemption requires that the company does not have knowledge of the illegal activity or information, and removes or disables access to it ‘expeditiously’ if it becomes aware of it. This has formed the basis for what is called the ‘notice and takedown’ model."
"Member states are prohibited from imposing a general monitoring duty on service providers in Article 15 of the Directive. This means that social media companies are legally envisaged to have a passive, rather than proactive, role in identifying and removing illegal content. When the UK leaves the EU, it will cease to have obligations under EU law. The government may then seek to tip the balance of liability for certain forms of illegal content towards social media companies," it said.
The Committee said that the emergence of new technology means it "no longer requires disproportionate effort or expense" for social media companies to remove or block access to content on their platforms. It called on parliament to "reconsider the balance of liability for social media content".
The Committee said: "This does not mean that the social media companies should be considered fully to be the publishers of the content on their sites. Nor should they be merely platforms, as social media companies use algorithms that analyse and select content on a number of unknown and commercially confidential factors. These out-dated categories must be reconsidered to recognise the changing nature of the creation, ownership and curation of online content and communications."
"Revising this legal framework which applies to the social media companies would incentivise the prompt, automated identification of illegal content. This would have a positive impact on combatting the intimidatory tone of online political discussions," it said.
Changes in the law would help "remove the current perverse incentives for companies to avoid any form of active moderation using machine learning", the Committee said.
Lord Bew, chair of the Committee on Standards in Public Life, said: "A significant proportion of candidates at the 2017 general election experienced harassment, abuse and intimidation. There has been persistent, vile and shocking abuse, threatened violence including sexual violence, and damage to property. It is clear that much of this behaviour is targeted at certain groups. The widespread use of social media platforms is the most significant factor driving the behaviour we are seeing."
Earlier this year, the European Commission published a new communication on tackling illegal content online. In the paper, it said platforms that act proactively to detect and remove illegal content published on their sites by others would not lose their protection from liability for that content under the terms of the E-Commerce Directive.
The Commission backed the use of technologies that enable platforms to automatically detect illegal content, including material that has been previously uploaded and removed.