Facebook is teaming up with some of its biggest tech industry counterparts in order to combat the spread of extremist content on the web.
On Monday, the company announced that along with Twitter, Microsoft, and YouTube it will begin contributing to a shared database devoted to “violent terrorist” material found on the respective platforms.
The flagged content will be identified using “hashes” — unique digital “fingerprints” — in the hope that sharing this data will streamline the removal process across the web’s biggest services.
In its blog post, Facebook describes the items being targeted as: “hashes of the most extreme and egregious terrorist images and videos … content most likely to violate all of our respective companies’ content policies.”
Theoretically, once a participating firm adds an identified hash of an extremist image or video to the database, another company can use that unique data to detect the same content on its own platform and remove it accordingly.
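That workflow can be sketched in a few lines. The sketch below is a simplification and an assumption on our part: the announcement does not specify the hashing scheme, and real systems of this kind typically rely on perceptual hashes (which survive re-encoding and resizing), whereas a cryptographic hash like SHA-256 only matches byte-identical files. The function names here are illustrative, not part of any company's API.

```python
import hashlib

# Hashes contributed by participating companies (stands in for the shared database).
shared_database = set()

def fingerprint(content: bytes) -> str:
    """Compute a unique digital "fingerprint" for a piece of content.
    SHA-256 is used here for illustration; production systems would
    likely use a perceptual hash robust to re-encoding."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """One company flags an extremist image or video and adds its
    hash to the shared database."""
    shared_database.add(fingerprint(content))

def matches_database(content: bytes) -> bool:
    """Another company checks an upload against the shared hashes.
    A match only flags the item; whether to remove it is decided
    under that company's own content policies."""
    return fingerprint(content) in shared_database

flagged = b"bytes of a flagged video"
contribute(flagged)
print(matches_database(flagged))             # the same file matches
print(matches_database(b"unrelated upload")) # a different file does not
```

Note that a match does not trigger automatic deletion: as Facebook's post stresses, each participating company still reviews matched content against its own policies.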
Facebook assures its users that no personal information will be shared, and matching content will not be removed automatically. Ultimately, the decision to delete content that matches a hash will rest with the respective company and the policies it has in place. Additionally, each firm will continue to apply its own transparency practices and its individual review process for government requests. Facebook says more partners for the tool will be sought in the future.
Over the past year, the web giants in question have all faced public pressure to tackle extremist content online. At the start of the year, execs from Google, Twitter, and Facebook met with White House officials to discuss the issue.
Facebook and Twitter have also been hit with lawsuits over their alleged inaction against terrorist groups operating on their respective sites. In response, Twitter has banned 325,000 accounts since mid-2015 for promoting extremism. For its part, Google began showing targeted anti-radicalization links via its search engine. Meanwhile, in May, Microsoft unveiled a slew of new policies in its bid to remove extremist content from its consumer services.
“Throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms,” Facebook wrote in its post. “We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful, and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.”