Post by zenner on Jun 23, 2017 12:30:04 GMT
FACEBOOK has started deploying its artificial intelligence capabilities to help combat terrorists’ use of its service.
Company officials said in a blog post overnight that Facebook will use AI in conjunction with human reviewers to find and remove “terrorist content” immediately, before other users see it. Such technology is already used to block child pornography from Facebook and other services such as YouTube, but Facebook had been reluctant to apply it to other, potentially less clear-cut uses. In most cases, Facebook removes objectionable material only if users first report it.
The move by Facebook comes just days after both Malcolm Turnbull and opposition leader Bill Shorten used parliamentary speeches on national security to call on big tech companies to do more to thwart terrorist groups using their platforms. It’s been a common theme from governments in the West in recent months.
Facebook and other internet companies face growing government pressure to identify and prevent the spread of terrorist propaganda and recruiting messages on their services. Earlier this month, British Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online. Some proposed measures would hold companies legally accountable for the material posted on their sites.
The Facebook post — by Monika Bickert, director of global policy management, and Brian Fishman, counter-terrorism policy manager — did not specifically mention calls from world leaders. But it acknowledged that “in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online”.
“We want to answer those questions head on. We agree with those who say that social media should not be a place where terrorists have a voice,” they wrote. Among the AI techniques used in this effort is image matching, which compares photos and videos people upload to Facebook against “known” terrorism images or videos.
A match generally means either that Facebook has previously removed that material, or that it has ended up in a database of such images that Facebook shares with Microsoft, Twitter and YouTube.
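The post doesn’t describe the matching algorithm, but the shared industry database is generally understood to hold “digital fingerprints” (perceptual hashes) of flagged images rather than the images themselves, so uploads can be checked without redistributing the originals. As a rough illustration only, here is a minimal Python sketch of that general idea using a simple “average hash”; the hash function, the distance threshold and the file names are illustrative assumptions, not Facebook’s actual system.

```python
from PIL import Image


def average_hash(path, size=8):
    """Compute a simple 64-bit 'average hash' fingerprint of an image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the image mean,
    # so visually similar images yield similar bit patterns.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)


def hamming_distance(h1, h2):
    """Count the bits on which two fingerprints differ."""
    return bin(h1 ^ h2).count("1")


def is_known(path, known_hashes, threshold=5):
    """Flag an upload whose fingerprint is close to any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)


if __name__ == "__main__":
    # Placeholder file names; in practice the shared database would hold
    # hashes of material the companies have already identified and removed.
    known_hashes = {average_hash("previously_removed.jpg")}
    print(is_known("new_upload.jpg", known_hashes))
```

Production systems use far more robust fingerprints (and video matching), but the principle is the same: a new upload is hashed and compared against the pooled database, and a close-enough match triggers removal or human review.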