San Francisco: Facing flak after the New Zealand terror attack that was livestreamed on its platform, social media giant Facebook is now exploring restrictions on who can use its “Facebook Live” feature.
Facebook came under pressure after a gunman livestreamed his terror attack on two mosques in Christchurch using Facebook Live.
The video of the terror attack in which 50 people were killed was viewed over 4,000 times before it was removed.
“We are exploring restrictions on who can go Live depending on factors such as prior ‘Community Standards’ violations.
“We are also investing in research to build better technology to quickly identify edited versions of violent videos and images and prevent people from re-sharing these versions,” Sheryl Sandberg, Chief Operating Officer (COO) at Facebook, said in a statement late Friday.
In the immediate aftermath, Facebook took down the attacker’s Facebook and Instagram accounts, removed the video of the attack, and used artificial intelligence to find and block related videos from being posted.
New Zealand Prime Minister Jacinda Ardern said on March 28 that Facebook needed to do a lot more to curb hate propaganda.
“We have heard feedback that we must do more — and we agree,” said Sandberg.
“In the wake of the terrorist attack, we are taking three steps: strengthening the rules for using Facebook Live; taking further steps to address hate on our platforms; and supporting the New Zealand community,” she added.
While the original New Zealand attack video was broadcast live, it spread mainly through people re-sharing and re-editing copies of it.