In a new article for British Journalism Review, Steven Barnett argues that while the government is right to want to protect the vulnerable from harmful social media, it is going about it in the wrong way. Barnett suggests there are four reasons why the well-intentioned journalism exemptions create a paradox at the heart of the Bill: they could easily be exploited to allow through harmful and dangerous material from extremist or conspiracy publications, or from hostile foreign states; there is no effective scrutiny of mainstream publishers whose content qualifies as harmful; journalistic content is explicitly protected elsewhere in the Bill anyway; and, crucially, they create a double standard which gives news publishers greater free expression rights than private citizens.
The full article can be read for free on the BJR website, with an extract reproduced below.
It’s time to think again
At an Institute for Public Policy Research (IPPR) conference in June this year, one of the main architects of the Online Safety Bill, Dr Lorna Woods, admitted that its core principle started on the back of a Pret a Manger napkin. To be fair to Dr Woods, neither she nor her conceptual partner from the Carnegie Trust, Will Perrin, will have anticipated the sprawling 213-page, 12-part Bill that was put on hold in July following Boris Johnson’s defenestration. Our newly installed prime minister will have some difficult decisions to make on how – and even whether – to take it forward. As it stands, the Bill is not fit for purpose.
Most are agreed that there is little wrong with that original core principle: to impose a duty of care on tech platforms to protect the more vulnerable – and particularly children – from some of the most harmful effects of the largest social media and search platforms, Facebook, Twitter, Google and their like. There has been enough evidence of damaging consequences from online bullying, proliferating self-harm and eating disorder sites, violent and misogynistic pornography, and profoundly disturbing homophobic, Islamophobic and antisemitic abuse to concern all of us.
On two of the three main conceptual components of the Bill, there is broad agreement: tech platforms and search services must, on pain of sanctions for non-compliance, have clear and effective processes in place for taking down illegal material and material that is harmful to children. There are further provisions to deal with online scams and for combating the kind of online anonymity so beloved of trolls – although how tech companies are supposed to do this without compromising genuine whistleblowers is one of many unknowns.
It is the third component, however, that has created an unbridgeable gap between the internet-safety advocates and the free speech champions: what about material that is legal but harmful (known colloquially as “awful but lawful”)? Here the water becomes not so much muddy as dangerous quicksand. Harmful is defined as material which could cause “physical or psychological harm”, and in the Bill’s first iteration it was left to tech platforms to determine what met that definition. Worried by the prospect of giving Messrs Zuckerberg et al too much power to censor speech, the government changed this to allow ministers, with Parliament’s approval, to decide what content meets the threshold. So ministers will have the power quickly to define or add new categories of “harmful” content – which the policies of platforms and search services must then address – through secondary legislation, with little notice or scrutiny.