Instagram algorithm boosted ‘vast pedophile network’: report

Instagram’s recommendation algorithms connected and even promoted a “vast pedophile network” that advertised the sale of illicit “child-sex material” on the platform, according to an alarming report published Wednesday.

Instagram allowed users to search by hashtags related to child-sex abuse, including graphic terms such as #pedowhore, #preteensex, #pedobait and #mnsfw — the latter an acronym meaning “minors not safe for work,” researchers at Stanford University and the University of Massachusetts Amherst told the Wall Street Journal.

The hashtags directed users to accounts that purportedly offered to sell pedophilic materials via “menus” of content, including videos of children harming themselves or committing acts of bestiality, the researchers said.

Some accounts allowed buyers to “commission specific acts” or arrange “meet ups,” the Journal said.

When reached for comment by The Post, a spokesperson for Instagram’s parent company, Meta, said it has since restricted the use of “thousands of additional search terms and hashtags on Instagram.”



The spokesperson added that Meta is “continuously exploring ways to actively defend against this behavior” and has “set up an internal task force to investigate these claims and immediately address them.”

“Child exploitation is a horrific crime,” a Meta spokesperson said in a statement. “We work aggressively to fight it on and off our platforms, and to support law enforcement in its efforts to arrest and prosecute the criminals behind it.”

Meta pointed to its extensive enforcement efforts related to child exploitation.

The company said it disabled more than 490,000 accounts that violated its child safety policies in January and blocked more than 29,000 devices for policy violations between May 27 and June 2.

Meta also took down 27 networks that spread abusive content on its platforms from 2020 to 2022.

The Journal noted that researchers at both Stanford and UMass Amherst discovered “large-scale communities promoting criminal sex abuse” on Instagram.



When the researchers set up test accounts to observe the network, they began receiving “suggested for you” recommendations for other accounts that purportedly promoted pedophilia or linked to outside websites.

“That a team of three academics with limited access could find such a huge network should set off alarms at Meta,” Alex Stamos, the head of the Stanford Internet Observatory and Meta’s former chief security officer, told the Journal.

Stamos called for Meta to “reinvest in human investigators.”

Meta said it already hires specialists from law enforcement and collaborates with child safety experts to ensure its methods for combating child exploitation are up to date.

In another bizarre development, the Journal noted that Instagram’s algorithm had previously sent users a pop-up notification warning that certain searches on the platform would yield results that “may contain images of child sexual abuse.”

The screen purportedly directed users to either “get resources” on the topic or “see results anyway.”

The report said Meta disabled the option allowing users to view the results anyway, but has declined to reveal why it was ever offered in the first place.

The Journal’s findings came as Meta and other social media companies face ongoing scrutiny over their efforts to police and prevent the spread of abusive content on their platforms.

In April, a group of law enforcement agencies that included the FBI and Interpol warned that Meta’s plans to expand end-to-end encryption on its platforms could effectively “blindfold” the company from detecting harmful content related to child sex abuse.


Source: New York Post