Algorithmic Discrimination in Online Spaces

Silicon Valley engineers and programmers shape the flow of information people engage with online. Whether it is a curated newsfeed or timeline on social media, or personalized search results and recommendations from large online retailers, websites and apps collect enormous amounts of data about people's habits, values, and actions. The collection of big data is a multibillion-dollar industry, and it is becoming commonplace to use such data in decisions about employment, health care, policing, purchases, and even housing.

And it is often not human beings who routinely make the quick decisions about whether to extend credit to an individual or to hire a person based on a social media profile. It is computers. More specifically, computer algorithms.

But are algorithms objective purveyors of truth? Can they accurately predict future outcomes from previous trends, without bias?

There is a common assumption that algorithms are neutral or objective, perhaps due in part to the mathematical properties of computer algorithms. However, people write and program algorithms; thus, the complex equations are not free of bias or human influence.

This means computer algorithms can discriminate and effect real changes in people's everyday lives.
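To see how bias can creep in without any explicit intent, consider a deliberately simplified, hypothetical loan-screening rule. The ZIP codes, default rates, and thresholds below are invented for illustration and are not drawn from any real lender:

```python
# A hypothetical, deliberately simplified loan-screening rule.
# No protected attribute appears anywhere in the code, yet the
# ZIP-code feature can act as a proxy for race or class, so the
# "neutral" math reproduces a human pattern of discrimination.

# Invented historical default rates by ZIP code; in practice such
# numbers can reflect decades of biased lending rather than any
# individual applicant's creditworthiness.
DEFAULT_RATE_BY_ZIP = {"48201": 0.22, "48304": 0.04}

def approve_loan(income: float, zip_code: str) -> bool:
    """Approve if income outweighs the neighborhood 'risk' penalty."""
    neighborhood_risk = DEFAULT_RATE_BY_ZIP.get(zip_code, 0.10)
    score = income / 10_000 - 50 * neighborhood_risk
    return score > 0

# Two applicants with identical incomes receive different outcomes
# purely because of where they live.
print(approve_loan(60_000, "48201"))  # False
print(approve_loan(60_000, "48304"))  # True
```

Nothing in that function mentions race, and every line is defensible as "just math," which is precisely why such discrimination is easy to overlook.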

What's worse, social and legal boundaries blur when algorithms discriminate, because there is little regulatory oversight and little legal protection for citizens when they do.

As a digital rhetoric and writing/media studies educator, each time I ask students to get online and click around, I am forced to think about their digital data trail. When students Google or use collaborative document sharing, I wonder about how their data is tracked — and sold to advertisers.
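Even a single click leaves a visible residue. As one small, concrete exercise, students can ask a public echo service to reflect back the metadata their own request carries. The sketch below uses Python's requests library and httpbin.org, a service that simply returns the headers it receives:

```python
# A minimal sketch of the data trail a single page request leaves.
# httpbin.org echoes back the request headers it receives, letting
# students inspect part of what every site they visit can log.
import requests

response = requests.get("https://httpbin.org/headers")
for name, value in response.json()["headers"].items():
    print(f"{name}: {value}")

# Typical output includes a User-Agent string identifying the client
# and operating system; a real browser also sends cookies, referrers,
# and language preferences that trackers can stitch into a profile.
```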

More importantly, I reflect upon the best educational practices for teaching students about algorithms, tracking technologies, and algorithmic discrimination.

Because computer algorithms are exceedingly complex, I'm not necessarily inclined to teach students a literacy of algorithmic calculation.

Instead, I am more inclined to integrate activism into coursework, encouraging students to speak up and out about the legal and social effects of algorithmic discrimination and to seek regulatory and legal protections.

However, this is only one model for integrating discussions about algorithms into classrooms. What models might work for you, within your own classroom/department/institution? What assignments and recommendations might offer students opportunities to learn about algorithms? How will your guidance help prepare students for the shifting online information economy?

Comments

Estee, I'm so glad you commented that "each time I ask students to get online and click around, I am forced to think about their digital data trail." I wonder about this sometimes when I ask students to participate in digital spaces: What will happen as a result of my request? Will I have the time and knowledge to share with them some of the potential side effects or consequences of their participation? I even think about the thousands of dead websites littering the web, a few created by my own past students--sites still lingering, no longer updated, but still part of the vast and searchable expanse of the online world.

I don't know that we're doing enough to think about sustainability when we ask students to participate online. And I don't think we're asking them, as you suggest here, to do enough activist work related to the legal and social effects of algorithmic discrimination. It's a fantastic idea. For example, I was just speaking with someone recently about how the Facebook photo tool to support Paris by turning your profile picture blue-white-and-red is of course potentially discriminatory--what about other causes that don't have an easy change-your-profile-picture button? Why not support Syria? Why not Libya? Why not Beirut? As you note here, "people write and program algorithms; thus, the complex equations are not free of bias or human influence." These technological systems of course reflect the politics of the individuals who helped code them, but you're calling for us to do more to speak up against the racism, classism, sexism, ableism, and so on that can be coded into the interfaces and algorithms that surround us.

It's an excellent point, and I agree. A timely assignment right now would be to ask Facebook coders how decisions are made to support certain causes (such as Digital India or Celebrate Pride) and not others. To ask what the constraints are for something like Facebook Safety Check, turned on when a natural disaster strikes (but natural disaster when and where, and what kind). To ask when the designers decide to co-opt or modify something like the natural viral spread of an Internet meme like the Human Rights Campaign meme. Asking about how and when and why those decisions get made would not only be an amazing learning experience, but it could have actual impact (as Facebook has made changes in the past to its interface when enough users complained). And that's a fantastic opportunity for students to see rhetoric at work in the world.

These are fantastic prompts to include in classroom discussion, Stephanie. Speaking of Facebook coders, I've been interested (for some time) to learn how the ALS ice bucket challenge received so much attention within the space at a time when the unfortunate death of Michael Brown in Ferguson, MO, provided a touchstone for #blacklivesmatter on Twitter. It would be fascinating (as a class project) to reach out to Facebook representatives to learn (in what ways the company will publicly share) how the site's algorithms prioritize certain trends over others. Thank you for the reply. This has got me thinking!
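Even without the company's cooperation, students could see the principle in miniature. The toy scoring function below is pure invention on my part (it is emphatically not Facebook's algorithm, which is proprietary), but it shows how a few human weighting choices decide which story "trends":

```python
# A toy trending score, invented for discussion; NOT Facebook's
# actual (proprietary) algorithm. The point: small, human-made
# weighting choices determine which story surfaces.
def trending_score(shares: int, positive_reactions: int,
                   flags: int, brand_safety_weight: float) -> float:
    # Rewarding "positive" engagement and applying a brand-safety
    # multiplier boosts feel-good campaigns and quietly demotes
    # contentious news.
    return (shares + 2 * positive_reactions) * brand_safety_weight - 5 * flags

# Invented numbers, for illustration only.
ice_bucket = trending_score(shares=10_000, positive_reactions=8_000,
                            flags=50, brand_safety_weight=1.0)
ferguson = trending_score(shares=12_000, positive_reactions=3_000,
                          flags=900, brand_safety_weight=0.6)
print(ice_bucket, ferguson)  # the "safer" story wins despite fewer shares
```

Asking students to tweak those weights, and to argue about what each weight encodes, might be a productive first step before (or alongside) writing to Facebook itself.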
