Many countries are tightening measures to clamp down on social media platforms, citing the need to protect users from material they deem “harmful”, just as American billionaire Elon Musk completes his purchase of Twitter.
The Wall Street Journal said regulators from Australia to the European Union, India, Canada and the United Kingdom have introduced, or are considering introducing, new rules to monitor online content.
These rules include a new European Union requirement for major platforms to conduct annual risk assessments, and a new commitment in Australia to remove content quickly upon notice from the country’s cyber safety commissioner.
On Monday, Musk, whose $44 billion bid for Twitter was accepted last month, indicated that his plans for the platform would be in line with new EU rules.
But other countries, such as China, could test or constrain Musk’s pledge to abide by the laws of the countries in which Twitter operates, according to the newspaper.
Musk has close ties to Beijing, which he used to build Tesla’s business in China.
It remains unclear how Twitter’s content moderation rules will work under Musk. He has indicated that he feels the platform has sometimes gone too far.
On Tuesday, he said he would reverse a 2021 decision to ban Donald Trump’s personal Twitter account.
He called the move a “morally bad decision” and said the permanent ban was undermining confidence in the company.
“If there are false and bad tweets, they should either be deleted or made invisible, and the account can be suspended temporarily, but not permanently,” Musk said.
Musk has indicated in recent weeks that any policy changes would be in line with local laws, writing on Twitter on Monday that he preferred to “get close to the laws of the countries” in which the company operates. He continued, “If citizens want a ban, pass a law to do so.”
The European Union’s Digital Services Act, which lawmakers approved in April, would force platforms to act quickly against illegal content and allow users to file a complaint if they disagree with moderation decisions.
Major platforms will also have to show regulators that they are taking steps to deal with the risks posed by certain legal content.
The legislation was introduced after some EU countries had already enacted their own regulatory changes. Germany, for example, has for several years required platforms to quickly remove illegal content, including hate speech, threatening companies with heavy fines for noncompliance.
Fines for violating the new EU rules could reach up to 6% of a major platform’s global revenue once the legislation goes into effect.
Repeated serious infractions could lead to a business ban in the European Union, according to lawmakers involved in the final deal.
Similar legislation has been proposed in the UK; in addition to imposing new rules for dealing with illegal content, it seeks to force major online platforms to address specific categories of content in their terms and conditions, such as material that encourages self-harm or eating disorders.
A new law in Australia goes further, allowing the country’s eSafety Commissioner to order platforms to remove certain content or face a fine.
The law, which took effect in January, gives the commissioner the power to require online service providers to remove offensive or seriously harmful content within 24 hours once they are provided with official notice.
Otherwise, the platform could be fined up to A$555,000, equivalent to about $385,000.
In India, last year the government unveiled a new set of guidelines that require social media platforms such as Twitter and Facebook to create systems to resolve user complaints about online posts.
Platforms also need to provide the government with contact information for their internal grievance officers.
Twitter has clashed repeatedly with the Indian government in the past, including over a government request to block accounts linked to tweets about farmers’ protests that the government said were inflammatory.
The company said at the time that the restrictions would not apply to journalists, media entities, activists and politicians because it believed doing so would violate their right to freedom of expression under Indian law.
Indian police visited Twitter’s New Delhi office in 2021 to investigate the company’s classification of tweets from a ruling party spokesperson as misleading.
In the US, lawmakers have made multiple proposals to tackle online content, and one bill, introduced in February, seeks to force companies to assess how algorithms and other digital features are harming children.
The Canadian government has also pledged to introduce legislation to tackle online content such as hate speech and child exploitation. It recently set up an expert panel to advise it after an earlier proposal was widely criticized.