When a Machine Curates: Algorithmic Rhetoric, Agency, and Authorial Control

Photographer Max Marshall discusses the changing nature of authorship and ownership in a networked world where others copy, paste, change, link, attribute, or misattribute someone else’s work. Rather than “wasting time” seeking out those who misattribute or fail to give credit, Marshall suggests fostering and extending a community ethos of sharing, acknowledgment, and trust. With the loss of individual authority, one gains the serendipitous juxtapositions and interesting pingbacks created by the collective curatorship of the blogosphere, with the ultimate result that more people experience one’s art.

A community of faculty and staff working together at a college or university, using a networked data tracking system that repeats, recontextualizes, and reinscribes content generated by its users, can be compared to Marshall’s notion of allowing one’s content to end up in interesting, unforeseen places. In response to demands from accrediting bodies, governing boards, and political funders that want data to “prove” the effectiveness of education and the value of a degree, these new student tracking systems are being implemented at colleges and universities to measure the institution’s attempts to ensure student success. These systems allow/require faculty to raise “flags” chosen from a prescriptive list of common academic difficulties and to narrate details of specific concerns about the student. The flag triggers an institutional response in the form of multiple communications with the student, his/her advisor, the counseling center, success coaches, and the office of academic research. The flag and its contents are tracked by the system, which accumulates data about the student, the faculty member raising the flag, and the responses of others who intervene to assist the student.
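To make the workflow concrete, here is a minimal sketch of how such a flag record and its automated fan-out might be modeled. All names, flag types, and recipients below are illustrative assumptions; the actual vendor software is proprietary.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical flag types standing in for the vendor's prescriptive list.
FLAG_TYPES = {"attendance", "low_grades", "missing_work", "in_danger_of_failing"}

# Parties notified automatically whenever any flag is raised.
RECIPIENTS = ["student", "advisor", "counseling_center", "success_coach",
              "office_of_academic_research"]

@dataclass
class Flag:
    student_id: str
    student_name: str
    course_name: str
    flag_type: str
    faculty_comment: str   # the instructor's narrated details about the concern
    raised_by: str         # faculty name, later reused as the email's apparent author
    raised_at: datetime = field(default_factory=datetime.now)

def raise_flag(flag: Flag, archive: list) -> list:
    """Record the flag and fan out notifications; no human reviews this step."""
    if flag.flag_type not in FLAG_TYPES:
        raise ValueError(f"'{flag.flag_type}' is not on the prescriptive list")
    archive.append(flag)   # the system accumulates data on student and faculty alike
    return [f"notify:{recipient}:{flag.student_id}" for recipient in RECIPIENTS]
```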

Once a flag is raised, the automated system sends an email to the student – attributed to and ostensibly from the faculty member who raised the flag – without human intervention. That is, the system is designed to immediately reblog the information submitted by the faculty member, copying it and pasting it into a new context: that of a canned email of concern to the student. At the very bottom of this email is the text that the faculty member added to the flag. The bulk of the email, however, is generic text procedurally written by the system in the voice of the faculty member. The program uses algorithmic logic to construct the email from a bank of phrases along with student-specific information pulled from the flag, such as the student’s name, ID number, and course name. For example, the email to the student states, “I want you to be successful in every course you take” and “I am very concerned about your success in (Insert course name here).” In this case, the reposting by the software has attributed words to someone who did not author them and distributed the message to a variety of audiences. Faculty members often remain unaware that their comments about the student have been repurposed into an email and archived in the system in this format. All this reposting of data creates an archived virtual identity of the student as “someone who struggles.” While the college’s motivation for using such a system was partly caring and concern (as well as the accountability demands of accrediting bodies), the reposting and archiving of confidential student information (such as a student’s reasons for missing class or likelihood of failure) creates a traceable, identifiable, and potentially public reinscription of the student.
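The mail-merge logic the paragraph describes can be sketched just as simply, reusing the hypothetical Flag record above. The phrase bank is an assumption modeled on the quoted examples, not the vendor’s actual text:

```python
# Illustrative phrase bank modeled on the quoted examples; the real canned
# language is defined by the vendor, not by the faculty member.
PHRASE_BANK = [
    "I want you to be successful in every course you take.",
    "I am very concerned about your success in {course_name}.",
    "Please reach out to me or your advisor as soon as possible.",
]

def build_concern_email(flag: Flag) -> str:
    """Assemble the canned email, attributed to the faculty member who raised the flag."""
    body = "\n".join(p.format(course_name=flag.course_name) for p in PHRASE_BANK)
    # The instructor's free-text comment lands at the very bottom, beneath
    # generic text written in their voice but not by them.
    return (
        f"Dear {flag.student_name} (ID: {flag.student_id}),\n\n"
        f"{body}\n\n"
        f"Instructor comment: {flag.faculty_comment}\n\n"
        f"Sincerely,\n{flag.raised_by}"
    )
```

Because every flag of a given type merges the same phrase bank, only the pulled-in fields differ from message to message – which is why, as noted below, Professor A ends up “saying” exactly the same words as Professor B.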

Unlike Marshall’s community of trust and acknowledgment, there is no human curator who makes aesthetic judgments about the value of the content or its applicability in a new context. The machine does not contemplate or judge the merits of the flag or comments and then determine an appropriate action to take, if any. It is an automatic switch that takes predictable, unconsidered actions applied to all inputs of a given type. Professor A “says” exactly the same words as Professor B in the machine-driven text sent to the student. Advisors and counselors receive emails about students they have never met – emails with personally identifying information and narrated details offered by a student to one person, now shared with everyone tagged by the system. While the tracking system itself is protected by a secured sign-on, the emails it generates are one click away from being forwarded to an outside party, thereby opening the professor and student to additional audiences and inscriptions. Clearly one must trust one’s colleagues and hope that such violations of FERPA do not occur, but the fact that actions are taken by uncritical software without the knowledge of student or professor raises concerns about the viability of a community of trust when it is governed by algorithmic rhetoric programmed by an external corporation that profits from the program’s installation and use.

Comments

I love the idea of talking about early alert/retention programs as a form of reblogging. First, there is the fact that many of these systems have formulaic input options as well (i.e., a faculty member can only “alert” based on pre-set options). Is that the initial post? Or is the system’s offering of that option the initial post?

Second, there is the idea that these are used to help with retention when there is TONS of scholarship showing that it is a student’s connections/relationships to the school and to individuals at the school that help with retention. Our students know these are machine-generated messages; these impersonal emails do not foster relationships or connections. At least many reblogged materials get the little 1-3 phrases/sentences of context about what/how/why the reblogger shared the content. That personalized touch shows the person behind/vetting the replication. And unless a faculty member uses that space to provide personalized feedback (which would still be funky w/in the context/tone of the machine-generated email environment), the message will probably fail its purpose. And if I’m taking the time to write that contextualizing message, why not write the email myself (unless, of course, the institution requires the early alert process...a whole other issue)?

Interesting question! I never thought about the offering of the options to the faculty members as the initial post. I think I see that as more of the “action potential,” the opportunity for a communication chain to begin, much like the nerve impulse that starts off cell-to-cell communication in neuroscience. To me, the canned statements in the software represent constraints on the procedural rhetoric that restrict user agency and, I would argue, rhetorical effectiveness.

I agree with you that students know these are canned responses and that ultimately they undermine meaningful relationships. We are not at that point yet, since the system is new and few students have received enough messages to recognize how repetitive they are. When I was in K-12, we implemented a system for comments on report cards. Again, there were canned options to choose from, and again, faculty were required to leave comments each quarter for students. It became a joke for the students: “Hey, John, which one did you get? ‘Shows improvement’?” and “Hey, Mrs. Brown, all I got was ‘Works hard.’ When will I get ‘A joy to teach’?” It is a gross underestimation of students’ rhetorical savviness to assume that they will believe these efforts are genuine on the part of the institution. But then, that isn’t quite the point of the system, is it? The activity generated is an end in itself. This posting/reposting gains credibility as it travels, as proof not of its authenticity or its value, but as an aggregation of data. These data aren’t a measure of influence or interest, like a viral blog post, but a measure of rote, machine-driven actions attempting to masquerade as one.

This post touches on the question of whether machines can act as rhetors.

Once a flag is raised, the automated system sends an email to the student – attributed to and ostensibly from the faculty member who raised the flag – without human intervention.

I attended a panel session earlier this week titled “Robots Everywhere! Is It Good For Us?” The panelists were scholars and artists researching and exploring ways that humans and robots can and will interact. One researcher, PhD student Heather Knight (a.k.a. Marilyn Monrobot) at The Robotics Institute at Carnegie Mellon, is working to help robots learn human social behaviors and interact verbally and mechanically in humorous, socially appropriate ways. Specifically, Knight is using comedy and acting to help robots learn social cues and to program appropriate responses.

When her robot told a joke during the panel, we in the audience laughed. The robot was not able to respond to the audience with a related joke; such a response would require an understanding of the complex social situation and a way of acting appropriately. Knight is working to make such reactions a reality — she’s teaching the robot “charisma.” Until that learning occurs, however, surely we would do better to remove automated messaging that is intended to represent a warm, caring response — a response that neither complex robots nor simple machines are capable of providing — to a student in crisis. Beyond the issue of authorship is the obvious message students receive, as noted by Rodrigo above: I am important enough to warrant an automatically generated email message from my school.

That’s hardly meaningful intervention, nor is it rhetorical. It’s surely among the worst possible kinds of reblogging in use.

I agree with your sentiments that AI cannot respond with warmth, compassion, or care, only a cheap semblance of them. However, I would argue that these responses are rhetorical. The system uses a procedural and algorithmic rhetoric that has a purpose, an audience, and a subject. Unfortunately, these are only tangentially related to the student in crisis. The student is the exigence that drives the rhetoric. The purpose is to prove that the institution made attempts to assist the student and to justify other data points regarding retention and degree completion to the audience: policymakers, boards, and politicians.

If these automated responses mixed with faculty comments are rhetorical, what is the ethos of the procedural and algorithmic rhetoric? This comment suggests it’s in part an ethos of compliance, even servility, to the demands of policymakers and politicians. When students receive these email messages, what do they perceive the ethos of the rhetoric to be? And when policymakers and politicians receive reports on the effectiveness of such interventions, what will they perceive the ethos of the interventions to be? Will each group recognize the ethos of compliance? Will they perceive an ethos of care for student well-being?
