Silicon Valley engineers and programmers create the flow of information people engage with online. Whether it is a curated newsfeed or timeline on social media, personalized search results, or recommendations from large online retailers, websites and apps collect a lot of data about people’s habits, values, and actions online. The collection of big data is a multi-billion-dollar industry. It’s becoming commonplace to use such data in connection with employment, health care, policing, purchases, and even housing.
And, it’s not human beings who are routinely making quick decisions on whether to extend credit to an individual or to hire a person based on their social media profile. Instead, it’s computers. More specifically, computer algorithms.
But, are algorithms objective purveyors of truth? Can algorithms accurately predict future outcomes based on previous trends, without bias?
There is a common understanding among people that algorithms are neutral or objective. Perhaps this is due, in part, to the mathematical properties of computer algorithms. However, people write and program algorithms; thus, the complex equations are not free of bias or human influence.
This means computer algorithms can discriminate and effect real changes in people’s everyday lives.
What’s worse is the blurring of social and legal boundaries when algorithms discriminate, because citizens have little regulatory oversight or legal protection when it happens.
As a digital rhetoric and writing/media studies educator, each time I ask students to get online and click around, I am forced to think about their digital data trail. When students Google or use collaborative document sharing, I wonder about how their data is tracked — and sold to advertisers.
More importantly, I reflect upon the best educational practices for teaching students about algorithms, tracking technologies, and algorithmic discrimination.
Because computer algorithms are exceedingly complex, I’m not necessarily inclined to teach students a literacy of algorithmic calculation.
Instead, I am more inclined to integrate activism into coursework, encouraging students to speak up and out about the legal and social effects of algorithmic discrimination and to seek regulatory and legal protections.
However, this is only one model for integrating discussions about algorithms in classrooms. What models might work for you, within your own classroom/department/institution? What assignments and recommendations might offer students opportunity to learn about algorithms? How will your guidance help prepare students for the shifting online information economy?