Platform governance at the margins: Social media creators’ experiences with algorithmic (in)visibility

Abstract
While champions of the “new” creative economy consistently hype the career possibilities furnished by YouTube, Instagram, TikTok, and the like, critics have cast a spotlight on the less auspicious elements of platform-dependent creative labor: exploitation, insecurity, and a culture of overwork. Social media creators are, moreover, beholden to the vagaries of platforms’ “inscrutable” socio-technical systems, particularly the algorithms that enable (or – conversely – thwart) their visibility. This article draws upon in-depth interviews with 30 social media creators – sampled from historically marginalized identities and/or stigmatized content genres – to explore their perceptions of, and experiences with, algorithmic (in)visibility. Together, their accounts evince a shared understanding that platforms enact governance unevenly – be it through formal (human and/or automated content moderation) or informal (shadowbans, biased algorithmic boosts) means. Creators’ understandings are implicated in experiential practices ranging from self-censorship to concerted efforts to circumvent algorithmic intervention. In closing, we consider how the regimes of discipline and punishment that structure the social media economy systematically disadvantage marginalized creators and cultural expressions deemed non-normative.

Funding Information
  • Cornell Center for Social Sciences (“Algorithms, Big Data, and Inequality”)
