San Francisco: Facebook is going to be “strict” in removing false news and, going forward, will remove all content that could result in “real physical harm” or amounts to “attacking individuals”.
“The principles that we have on what we remove from the service are: If it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform. There’s a lot of categories of that, which we can get into, but then there’s broad debate,” Chief Executive Mark Zuckerberg said in an interview with technology news website Recode.
Facebook-owned WhatsApp is facing flak in India for allowing the circulation of a large number of irresponsible messages filled with rumours and provocation, which have led to growing instances of lynching of innocent people.
In June, Facebook removed content that alleged Muslims in Sri Lanka were poisoning food given and sold to Buddhists.
A coalition of activists from eight countries, including India and Myanmar, in May called on Facebook to put in place a transparent and consistent approach to moderation.
In a statement, the coalition demanded civil rights and political bias audits of Facebook’s role in abetting human rights abuses, spreading misinformation and manipulating democratic processes in their respective countries.
Besides India and Myanmar, the other countries that the activists represented were Bangladesh, Sri Lanka, Vietnam, the Philippines, Syria and Ethiopia.
The group’s demands carried particular weight as Facebook had come under fire for its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.
Sri Lanka temporarily shut down Facebook earlier in 2018 after hate speech spread on the company’s apps resulted in mob violence.
According to The Verge, Facebook will review posts that are inaccurate or misleading, and are created or shared with the intent of causing violence or physical harm.
The posts will be reviewed in partnership with local organisations in each country, including threat intelligence agencies.
“Partners are asked to verify that the posts in question are false and could contribute to imminent violence or harm,” Facebook said.