Instagram users in the U.S. will soon be able to report content they deem to be fake as the Facebook-owned app joins the fight against online misinformation, Reuters reported.
The new feature, announced Thursday, will allow users of the photo and video-sharing app to flag suspicious posts. Flagged content will no longer show up in Instagram’s “explore” page, and will also be removed from hashtag search results.
Stephanie Otway, a Facebook company spokeswoman, said the feature is “an initial step as we work towards a more comprehensive approach to tackling misinformation.”
The app has made other efforts to crack down on misinformation: in May, Instagram introduced image-detection technology to identify content that had already been debunked on Facebook, similarly limiting the reach of those posts.
Social media sites have become fertile ground for online misinformation, which has been on the rise and blamed for falsehoods ranging from medical hoaxes to fictitious political content ahead of elections.
Instagram’s efforts to fight misinformation lag behind those of Facebook, which has 54 fact-checking partners working in 42 languages and last year removed more than 500 pages spreading false content.
A report commissioned by the Senate Intelligence Committee found that Russian actors received more engagement on Instagram than on Facebook around the time of the 2016 elections, suggesting that Instagram was the more effective tool for election interference.
The report said Instagram is “likely to be a key battleground on an ongoing basis” and could play a crucial role in the 2020 elections.