As the number of terrorist attacks continues to increase globally, Facebook is trying to be fully transparent about its plans to keep terrorist content off its website. To make these efforts more effective, the company has enlisted the help of both artificial intelligence and human expertise.

To kick off the initiative, Facebook introduced a series called “Hard Questions” as a safe space to discuss complicated subjects. The first post in the series, titled “How We Counter Terrorism,” was written by Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, its counterterrorism policy manager, who explain in detail how Facebook is committed to making the platform a hostile environment for terrorists.

The post lists a number of current tactics that use AI. One is image matching, in which systems check whether an uploaded image matches any terrorism content Facebook has previously removed, preventing other accounts from posting the same photo or video. Another experiment Facebook is currently running involves analyzing text previously removed for supporting terrorist organizations in order to create text-based signals. These signals will help strengthen the algorithms already in place so they catch similar posts more quickly.
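Facebook hasn’t published the internals of these systems, but the image-matching idea can be pictured as comparing a compact fingerprint of each upload against fingerprints of previously removed content. The minimal Python sketch below uses an exact SHA-256 hash for clarity; a production system would more likely rely on a perceptual hash so that re-encoded or lightly edited copies still match. All names here are hypothetical.

```python
import hashlib

# Hypothetical store of fingerprints of content already removed for
# terrorism; a real system would persist and share this at scale.
removed_fingerprints: set = set()

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-size fingerprint (an exact hash here)."""
    return hashlib.sha256(image_bytes).hexdigest()

def record_removed_image(image_bytes: bytes) -> None:
    """Remember the fingerprint of an image removed for terrorism content."""
    removed_fingerprints.add(fingerprint(image_bytes))

def matches_removed_content(image_bytes: bytes) -> bool:
    """True if an uploaded image matches previously removed content."""
    return fingerprint(image_bytes) in removed_fingerprints
```

The text-based-signals experiment can be sketched the same way: train a classifier on posts already removed for supporting terrorist organizations, then use its score to flag similar new posts. A rough sketch, assuming scikit-learn and stand-in training data:

```python
# Train a classifier on previously removed posts (label 1) versus
# benign posts (label 0); the training strings are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

removed_posts = ["placeholder: removed post 1", "placeholder: removed post 2"]
benign_posts = ["placeholder: benign post 1", "placeholder: benign post 2"]

texts = removed_posts + benign_posts
labels = [1] * len(removed_posts) + [0] * len(benign_posts)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# High-scoring posts would be queued for human review rather than removed
# automatically, since context (e.g. news reporting) matters.
score = classifier.predict_proba(["placeholder: new post"])[0][1]
```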

Human judgment is still required so the AI doesn’t flag a terrorism-related photo that appears in a legitimate context, such as a news story. To ensure constant monitoring, the community operations team works 24 hours a day, and its members are fluent in dozens of languages. The company has also added more people to its team of terrorism and safety specialists, ranging from former prosecutors to engineers, whose sole responsibility is countering terrorism.

Facebook will continue to grow its headcount: CEO Mark Zuckerberg has announced plans to expand the community operations team by adding 3,000 more employees across the globe, a decision that came after a string of violent deaths and incidents were broadcast over Facebook Live. With a larger team of reviewers, Zuckerberg noted, inappropriate content can be taken down faster and people in danger can get help sooner.

The company also continues to develop partnerships with researchers, governments, and other companies, including Microsoft, YouTube, and Twitter. These businesses all contribute to a shared database of known terrorist content, so that material identified on one platform can be caught on the others.
