Twitter’s war against QAnon may be paying off.

In a tweet on Thursday, Twitter said that since it took action against the far-right conspiracy theory group QAnon two months ago, impressions on that content have dropped by more than half, a result of policies aimed at reducing “coordinated harmful activity” on the platform.

“In July, we began removing Tweets associated with QAnon from Trends and recommendations, and not highlighting them in conversations and Search,” the company said Thursday. “Impressions on this content dropped by more than 50%, decreasing the amount of unhealthy and harmful content on timelines.”

Yoel Roth, Twitter’s head of site integrity, said in a tweet, “Removing harmful content from recommendations and amplification surfaces works. It takes the wind out of the sails of how this content propagates across Twitter.”

He continued, “These are encouraging results, and we’re going to continue to invest in building out our approach.”

QAnon, which originated on the anonymous message boards of 4chan, is a conspiracy theory alleging — without proof — that President Donald Trump is waging a secret battle against Satanic child abusers, most often prominent Democrats and liberal celebrities.

The once-fringe cult has since moved its messaging and baseless rhetoric to popular sites like Facebook, YouTube, and TikTok.

At times, the group has captured nationwide attention by promoting misinformation and manipulating the media. It reignited 2016’s “Pizzagate” conspiracy theory and spread a baseless theory about furniture retailer Wayfair earlier this summer.

Twitter was the first of the major social media companies to take direct, targeted action against the group in July, removing thousands of accounts and pledging to block QAnon-related hashtags and topics from its “Trending” section.

Thursday’s announcement that impressions on QAnon-related content have been cut in half is a clear sign that content moderation can help quell the spread of misinformation that might otherwise lead to real-world violence.

Twitter’s response to misinformation and hate speech in recent months stands in stark contrast to its previously hands-off approach.

In May, the company took action against Trump for the first time, adding a fact-check label to an inaccurate tweet about mail-in voting, and it has continued to moderate his tweets since. The president has on occasion retweeted accounts known to promote QAnon content.

However, QAnon content still exists on Twitter, and the company doesn’t plan on banning all of it. Supporters who are familiar with the group’s most-used hashtags and keywords can easily find its content on the platform.
