Serving the GW Community since 1904

The GW Hatchet

AN INDEPENDENT STUDENT NEWSPAPER SERVING THE GW COMMUNITY SINCE 1904


Online hate stems from small platforms, researchers find

File Photo by Sage Russell | Assistant Photo Editor
Pedestrians pass the Science and Engineering Hall on a sunny day.

In a study published earlier this month, researchers found that online hate develops on smaller social media platforms rather than on mainstream ones.

Researchers found that online hate speech originates on smaller platforms like 4chan, Discord and Telegram by using mapping technology only available at GW to see how content connects to larger platforms like Facebook and X, formerly known as Twitter. Neil Johnson, a professor of physics and researcher for the study, said the findings challenge public understanding and policy efforts to control online hate speech, which focus on regulating content on larger platforms. 

Johnson said mainstream platforms are “passive receivers” of hate speech content because content does not originate there but instead on smaller platforms. Johnson said the findings contradict the conception among researchers and the public that large platforms are the main method for spreading online hate. 

“The assumption is the large platforms are the ones that create the largest problem, that somehow they’re the big influence, but they’re not,” Johnson said. 

To study the relationship, researchers created a comprehensive map of “adaptive linked dynamics,” large ecosystems of 30 smaller platforms that feed hate speech onto mainstream ones, according to the study. Johnson said users’ continued engagement on mainstream platforms with content generated on smaller platforms strengthens these “hate highways.” Johnson said the researchers’ goal was to unravel the source of online hate speech. 

“That’s the largest map, it’s almost like the James Webb telescope of the online world regarding hate,” Johnson said. 

Courtesy of Neil Johnson

Researchers applied their model to the Jan. 6, 2021, insurrection at the U.S. Capitol and found hate speech on smaller platforms like 4chan and Telegram increased two days prior to the insurrection and originated from preexisting hate networks on those platforms.

Johnson said hate speech originates from thousands of online communities dedicated to conspiracy theories and extremism on smaller platforms that drive narratives amplified on the main platforms. He said this finding contradicts the idea that hate communities are isolated echo chambers because they are connected across smaller platforms and easily spread to larger audiences. 

Johnson said current efforts to regulate hate speech on social media like the European Union Digital Services Act — which requires certain platforms to provide data on content moderation and remove illegal speech — will fail because they only focus on regulating content on large platforms. He said lawmakers need to evaluate small and large platforms and proposed that company leaders could collaborate to sever network ties, keeping hateful content contained on smaller platforms.

“Until policymakers include this slew, this whole ecosystem of smaller platforms in the discussions, the problems of online harm, hate misinformation, disinformation, will not go away,” Johnson said. “In fact, they’ll get worse.” 

He said his previous work studying online hate “missed the point” because researchers exclusively focused on X, ignoring smaller platforms and online communities.

Richard Sear, a GW senior faculty programmer and data administrator, said he handled data management and collection for the study. Sear said they tracked “hub” communities like the “politically incorrect” group on 4chan, which are popular among hate speech followers. He said hate speech communities coordinate the spread of their content to the larger platforms and find ways to circumvent moderation. 

“They figure out ways of dodging the content moderation, the new policies that exist on the original platform,” Sear said. 

Sear said approaching hate speech from a data-driven perspective will help companies moderate their content more effectively because they will know where to look for it. He said platforms must understand the way hate communities are connected instead of playing “whack a mole” by guessing where the hateful speech is going to pop up. 

“It’s very important to constantly be taking a realistic and data driven approach to this stuff,” Sear said. “I think it’s very easy to get kind of bogged down in specific little cases.” 

Experts in the fields of media studies, hate speech and communication said the study’s findings challenge conventional research but expressed doubts that hate speech can be easily controlled on smaller platforms. 

Joseph Walther, a distinguished professor of communication studies at the University of California, Santa Barbara, said he was “surprised” by the researchers’ findings. He said most of the research he studied focuses on an opposite trend, with hate speech users spreading content on smaller platforms only after they have been removed from larger platforms. 

“It’s a surprise to me,” Walther said. “There’s other research that shows the opposite direction, but what I find most fascinating in this field is cross platform hate behavior and the level of coordination that takes place between platforms.” 

Walther said he is skeptical about efforts trying to regulate the smaller platforms because their appeal to users is their lack of content moderation and unwavering support for “free expression.” He said he does not believe Congress would pass regulations for smaller platforms since the governing body is so divided and would need support from “ultraconservative” members. Walther added that in some states, like Texas and Florida, there have been efforts to prohibit any social media regulation. 

“All social media platforms will become an absolute cesspool of hatred in other forms of messages that many people will find extremely objectionable,” Walther said. “But several of the smaller networks wave the flag quite vociferously that they are operating and do not censor under the banner of free speech and free expression.” 

Caitlin Carlson, an associate professor of communication and media at Seattle University and the author of “Hate Speech,” said the researchers’ findings about social media platforms reflect a historical trend in larger media systems. Carlson said in television programming, far-right ideas start out on fringe websites and networks like Breitbart and One America News Network before being broadcast on channels with wide audiences like Fox News. 

“That sort of movement from kind of smaller more fringe outlets into the mainstream is what we’ve seen traditionally across media history,” Carlson said. 

Carlson said the researchers’ focus on communities instead of individuals explains why hate speech is more likely to occur on smaller platforms. She said communities on smaller platforms like 4chan and Gab provide anonymity not available on mainstream platforms like Facebook or LinkedIn because, on small platforms, people are often not connected to people they know in real life, which makes the spread of hate speech more likely. 

“These people are not geographically close to one another, but they find these online communities and that’s a place where these kinds of ideas can proliferate,” Carlson said. 
