Twitch is a live-streaming platform known primarily for video game broadcasts. As a platform, it allows for and encourages community building around particular identities. Twitch transforms a gaming experience into a social performance through interactions between the player, the spectator, and the game itself. You can follow lists of streamers who identify as LGBTQ, for instance, or you can find people of color who cultivate audiences around the idea of safe spaces and community building.
This, however, also allows for the cultivation of communities that some might consider problematic, as Twitch live chat is often full of sexist, racist, or homophobic comments. While some streamers employ comment moderators to control their streams, streamers themselves might also engage in and encourage discriminatory language and attitudes. Some of these streamers are approved partners of Twitch, and while the platform sometimes bans offenders, the ban hammer doesn't come down on everyone equally. One recent example illustrates this unequal distribution: a prominent streamer was temporarily banned for sexist hate speech, but his official Twitch Partner status was not revoked. Incidents like this arise because Twitch's community guidelines are vague and constantly evolving, leaving room for streamers to develop their own community guidelines but also to push the boundaries of allowable content.
The problem of content moderation is not unique to Twitch; it can be seen across internet platforms such as Facebook, Twitter, and YouTube. When we think of community building, we often think of it in terms of marginalized communities and identities struggling to find spaces of their own. However, as seen in the trouble with moderating racist content, the platform is just as capable of promoting harmful attitudes as it is liberatory ones. This tension is evidenced in the structure of the Twitch platform itself, as seen through its content moderation struggles and its privileged Partners program. Does Twitch benefit from purposely vague community guidelines that encourage harmful behavior, or is it a supportive platform for marginalized people to build communities of play?
I like this piece as a continuation of critiques about both internet echo chambers and the supposed "avoidability" of hate speech on social platforms. When official titles such as Twitch Partnership are involved, it becomes very clear that, while there is a mythology of limitless social space on sites such as Twitch, Facebook, Twitter, etc., flimsy rules systems offer hateful streamers a central place in the formation of the community and in new adaptations of the site's technology. The vagueness surrounding bans for these players relates back to the formative role Twitch has allowed them to serve for the site in the past, a role which has empowered like-minded streamers to take the place of the ones who are actually banned. This is not to say that video game spectatorship does not have a pronounced willingness to tolerate (and oftentimes a great penchant for) misogyny and racism, but I think we must also investigate the relationship between hate and the literal architecture of Twitch.
You are definitely right about the need to investigate the relationships between these behaviors and Twitch's literal architecture. I think you also bring up an interesting point about the vagueness of bans and, more broadly, the vagueness of community guidelines. I'm inclined to think that platforms like Facebook, Twitter, and now probably Twitch intentionally create vague community guidelines that play into received notions of community without having to take an active role in deciding what happens on the platform (to do so cuts into the bottom line). Facebook has already demonstrated that it will relax its community guidelines so that it can enter new markets and, as you've noted, Twitch relies upon these players in order to bring in viewers and streamers. -Dan Lark