Twitch has its own “Off-Service” investigations team, which it says will be tasked with reviewing streamers who potentially violate its misinformation policy. Accounts will face bans only if they meet all three criteria listed in the guidelines — the misinformation must come from a user whose account is dedicated to such content, and that content must be both “widely disproved and broadly shared.” Beyond that, Twitch also has to determine the content is “harmful,” such as promoting fake COVID-19 treatments that may put unwitting viewers’ health at risk.
The company says it has teamed up with various experts and entities to help it tackle the problem, including researchers who have shed light on whether Twitch’s new policy will actually help reduce this type of content. Among other things, Twitch says it’ll work with the Global Disinformation Index and other entities to help it determine which content constitutes harmful misinformation. As with any account that violates Twitch’s overall community guidelines, the company can (and likely will) suspend any streamers who check each of the three boxes.
Of course, Twitch isn’t the only company cracking down on this kind of content. In September 2021, for example, competitor YouTube started removing videos and banning accounts that spread false information about the COVID-19 vaccines. That effort likewise targeted content that could prove harmful to individuals and society, including videos spreading misinformation about the contents of the vaccines and their side effects.