The Draft Online Safety Bill: what is it, what does it cover and what will change?

What’s illegal offline should be regulated online. The Online Safety Bill is proposing new legislation that will make internet service providers responsible for what’s happening on their platforms.

Age Verification

8 March 2022

Fiona Jones

Content Lead for Digital Identity Net


What is the draft Online Safety Bill?

The Online Safety Bill is a government bill first published on 12 May 2021. It followed the Online Harms White Paper from 2019, which was a result of commitments made in the Internet Safety Strategy Green Paper in 2017 – so there’s a lot of background to this bill.  

A government bill is a proposal for a new law or a change in the existing law. Bills are put forward by the Government for consideration in Parliament. The Online Safety Bill is due to be put to Parliament for approval this year.  

The Online Safety Bill is proposing new legislation that will make internet service providers responsible for what’s happening on their platforms. This includes serious crimes like child abuse, fraud, racist abuse, promoting self-harm and violence against women.  

Recently, the Chair of the Joint Committee on the bill said, “The Committee were unanimous in their conclusion that we need to call time on the Wild West online. What’s illegal offline should be regulated online. For too long, big tech has gotten away with being the land of the lawless. A lack of regulation online has left too many people vulnerable to abuse, fraud, violence and in some cases even loss of life.” 

What does the Online Safety Bill cover? 

The draft bill covers ‘user-to-user services and search services’, mainly meaning social media platforms and messaging services. Under the bill, regulators would be able to penalise any of these services for not following the rules.

The laws currently relating to online communication pre-date the age of social media and need updating.  

The objective of the bill is to tackle themes of harmful content online. Those listed include harms affecting children, racist abuse, abuse against LGBTQ+ people, misogynistic abuse and violence against women and girls, religious hate and antisemitism, abuse against disabled people, and other factors exacerbating harms, all while keeping the impact on freedom of speech to a minimum.  

This might seem like a lot, and it is. However, the need for this bill and a change to laws is evidenced throughout the documentation, with case studies including:  

  • A recap of the racist abuse suffered by members of the England football squad after missed penalties in the Euro 2020 final, alongside personal testimony from Rio Ferdinand about the impact that racist abuse on social media has had on him and his family. The bill also notes that racism online is by no means limited to high-profile individuals. 
  • The story of how fraudsters pretend to be Martin Lewis, founder of Moneysavingexpert.com, to entice victims, illustrating the scale of online fraud and the lack of protection in this area. 
  • Witness examples of cyberflashing, where people are targeted with unsolicited sexual images. 
  • Many instances of online abuse against women and girls, aligning with the fact that 62 per cent of women aged 18–34 report having been a victim of online abuse and harassment. 
  • And much more including interference with elections, content promoting self-harm, and deepfake pornography.  

On Safer Internet Day, 8th February, it was announced that the age verification components of the bill would be strengthened.

The new requirements would place a legal duty on all sites where children are likely to be exposed to pornography — including commercial sites, user-generated content sites, and social media sites that fail to take down adult content — to put ‘robust checks’ in place to ensure all users are over 18. 

A couple of weeks later, on 25 February, further measures were added to the bill, addressing anonymous abuse on social media.

The official announcement stated, “The government recognises too many people currently experience online abuse and there are concerns that anonymity is fuelling this, with offenders having little to no fear of recrimination from either the platforms or law enforcement”.

Two duties were added to the bill. The first will force social media sites to give adults the ability to block people who have not verified their identity on a platform. The second will require platforms to give users options to opt out of seeing harmful content. 

The bill accepts that ‘hate and harm existed before the internet, and still would without it’, but ‘the evidence is that these systems and processes have actively made things worse’.  

Who wrote the Online Safety Bill? 

Many experts contributed to the production of the bill: it was informed by over 200 written submissions and 50 oral evidence hearings from witnesses who shared their experiences. Witnesses included former employees and whistle-blowers from Big Tech firms such as Facebook.   

What will the bill change? 

If the bill becomes law, it will mean changes ahead for many Big Tech companies. From safety by design to algorithmic promotion of content and anonymity rules, social media platforms and search engines will have to make some big changes to their platforms to escape prosecution.  

Much of the bill may sound unfamiliar to readers but will mean changes to very familiar online activity. For example, algorithmic promotion simply refers to how social media platforms promote content.

Jimmy Wales, the founder of Wikipedia, explained it well: “I do not have a crazy racist uncle, but we all know the stereotype, down at the pub spouting off nonsense to his mates. That is a problem, but it is not a problem requiring parliamentary scrutiny. When it becomes a problem is not that my crazy uncle posts his racist thoughts on Facebook, but that he ends up with 5,000 or 10,000 followers because everyone in the family yells at him and the algorithm detects, “Ooh, engagement”, and chases after that, and begins to promote it. That is a problem, and it is a really serious problem that is new and different.”

The bill uses this example alongside many others to outline that algorithmic promotion of content that stands to benefit the platform the most can ‘directly result in the amplification of content that creates a risk of harm’. 

Similarly, the bill proposes that platforms limit the sharing of harmful content through safety by design as a mitigation measure. In practice, this means building in measures that slow the spread of harmful content.

Renée DiResta, Technical Director at the Stanford Internet Observatory, explained how platforms can implement a circuit breaker, where content that has reached a certain level of sharing could be sent to teams within the platform who assess what is happening and if it has the potential to harm anyone.
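In code, a circuit breaker of this kind might look something like the sketch below. This is purely illustrative — the names, threshold, and data structures are assumptions for the sake of the example, not anything specified by the bill or by any platform.

```python
# Illustrative sketch of a content "circuit breaker": once a post's share
# count crosses a threshold, further sharing pauses and the post is queued
# for a human safety team to assess. Threshold and names are hypothetical.

REVIEW_THRESHOLD = 1000  # shares before the circuit breaker trips


class Post:
    def __init__(self, post_id):
        self.post_id = post_id
        self.share_count = 0
        self.held_for_review = False


review_queue = []  # posts awaiting assessment by the platform's safety team


def record_share(post):
    """Count a share; trip the breaker once sharing reaches the threshold.

    Returns True if the share went through, False if sharing is paused.
    """
    if post.held_for_review:
        return False  # sharing paused while reviewers assess the post
    post.share_count += 1
    if post.share_count >= REVIEW_THRESHOLD:
        post.held_for_review = True
        review_queue.append(post)  # hand off to human review
    return True
```

The design choice here is that virality itself is the trigger: content spreads normally until it reaches unusual reach, at which point a human decides whether amplification should continue.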

Frances Haugen, an ex-Facebook employee, shared how even simpler design measures can slow the spread of harmful content. “Let us imagine that Alice posts something and Bob reshares it and Carol reshares it, and it lands in Dan’s news feed. If Dan had to copy and paste that to continue to share it, if the share button was greyed out, that is a two-hop reshare chain, and it has the same impact as the entire third party fact-checking system … ”. 
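The two-hop rule Haugen describes can be reduced to a single check: is the content already at the end of a two-reshare chain? A minimal sketch, with the hop limit and function name as illustrative assumptions:

```python
# Illustrative sketch of two-hop reshare friction: the one-click share button
# works for the first two hops of a reshare chain; beyond that, users must
# copy and paste to continue spreading the content. The limit is hypothetical.

MAX_ONE_CLICK_HOPS = 2


def can_one_click_share(reshare_depth):
    """Return True if the share button is still enabled at this chain depth.

    reshare_depth is how many reshares deep the content already is:
    Alice's original post is depth 0, Bob's reshare is depth 1, and so on.
    """
    return reshare_depth < MAX_ONE_CLICK_HOPS


# In Haugen's example: Bob can reshare Alice's post (depth 0) and Carol can
# reshare Bob's (depth 1), but by the time it lands in Dan's feed (depth 2)
# the button is greyed out and Dan must copy and paste to share further.
```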

There is a lot to digest in the bill, but really the goal is simple: to make the internet a safer place. Part of this will mean putting more control into the hands of individuals, so they can choose what they do or don’t see online.  

If you’d like to find out more, access the entire draft Online Safety Bill here or find out the role of digital identity in making us feel safer online.