Canada is among 17 countries and eight technology companies that today endorsed the Christchurch Call to Action, which commits governments and industry to do more to fight the spread of terrorist and extremist content online. The signings took place in Paris.
The non-binding declaration takes its name from the New Zealand city where, on March 15, a gunman attacked two mosques and was able to live-stream his actions across social media.
It doesn’t ask governments to do specific things, like pass laws or regulations. However, signatory governments do promise to ensure effective enforcement of applicable laws and to encourage media outlets to apply ethical standards when writing about terrorist events.
The declaration doesn’t include the signature of the United States, home to some of the biggest social media platforms in the world. “While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the call. We will continue to engage governments, industry and civil society to counter terrorist content on the internet,” CNN quoted the White House as saying.
At a press conference, French president Emmanuel Macron noted that a number of nations have pledged in the last year or two to combat a range of online threats. This declaration widens the number of countries.
“The aim of the Christchurch Call is to be more specific about the removal and elimination of terrorist and violent extremist content online,” he added.
“We are setting a common framework for Internet companies to fight against terrorism,” he said, and there will be a meeting in October on how to implement steps to take down offending content.
“It is unprecedented that we have both tech companies and countries” making commitments, said New Zealand Prime Minister Jacinda Ardern, co-host of Wednesday’s conference with Macron.
Other groups have made promises to fight specific threats, like ISIS, she said. The Christchurch Call is broader, going after violent extremism.
It also urges signatories to not only eliminate offending content, she noted, but also to examine the algorithms that platforms use to rank content — there are allegations some algorithms favour “controversial” content — and to seek better global crisis management for taking down violent content before it spreads around the world.
A secretariat will be set up to measure progress.
The signatories with a leader present at the meeting were France, New Zealand, Canada, Indonesia, Ireland, Jordan, Norway, Senegal and the UK, as well as the European Commission. Countries not present but signing on are Australia, Germany, Japan, Italy, India, the Netherlands, Spain and Sweden.
Amazon, Facebook, Dailymotion, Google, Microsoft, Qwant, Twitter, and YouTube have also signed.
This is one of a number of international efforts. In advance of the annual meeting of G7 nations (this year in August in France), government officials held a pre-conference meeting Wednesday to go over digital-related issues to be discussed, including ways to combat dangerous online content.
With the encouragement of the G7, tech companies have formed the Global Internet Forum to Counter Terrorism (GIFCT) to work together to counter violent extremist and terrorist use of the Internet. The EU Internet Forum has created indicators for content removal to guide Internet providers on their actions.
In December, Canada announced a national strategy to combat radicalization to violence, which focuses on education and early prevention. It also includes promises to work with technology companies, academic researchers and civil society on finding how online radicalization to violence can be prevented.
In the Christchurch Call to Action, participants commit to:
Counter the drivers of terrorism and violent extremism by strengthening the resilience and inclusiveness of our societies to enable them to resist terrorist and violent extremist ideologies, including through education, building media literacy to help counter distorted terrorist and violent extremist narratives, and the fight against inequality.
Ensure effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content, in a manner consistent with the rule of law and international human rights law, including freedom of expression.
Encourage media outlets to apply ethical standards when depicting terrorist events online, to avoid amplifying terrorist and violent extremist content.
Support frameworks, such as industry standards, to ensure that reporting on terrorist attacks does not amplify terrorist and violent extremist content, without prejudice to responsible coverage of terrorism and violent extremism.
Consider appropriate action to prevent the use of online services to disseminate terrorist and violent extremist content, including through collaborative actions, such as:
- Awareness-raising and capacity-building activities aimed at smaller online service providers;
- Development of industry standards or voluntary frameworks;
- Regulatory or policy measures consistent with a free, open and secure internet and international human rights law.
Technology companies commit to:
- Take transparent, specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms;
- Provide greater transparency in the setting of community standards or terms of service, including outlining and publishing the consequences of sharing terrorist and violent extremist content;
- Review algorithms and other processes that may drive users towards and/or amplify terrorist and violent extremist content.
Ahead of Wednesday’s meeting in Paris, Facebook said Tuesday that people who have broken rules on the platform — including its Dangerous Organizations and Individuals policy — will be restricted from using Facebook Live for set periods of time. For example, someone who shares a link to a statement from a terrorist group with no context will be immediately blocked from using Live for a set period.
Facebook will extend these restrictions to other areas over the coming weeks, the company added, beginning with preventing those same offenders from creating ads on Facebook.
It will also invest US$7.5 million in new research partnerships with academics from three universities to improve image and video analysis technology to better catch offensive content.
These moves come after Facebook was criticized for not acting fast enough to cut off the live-streamed broadcast from the man who killed and wounded people in an attack on two Muslim congregations in Christchurch, N.Z., in March.