I want to begin with this: using algorithms to evaluate writing is nothing new. A holistic scoring rubric and procedure for, say, calibrating human raters of essays…that’s an algorithmic approach. The algorithm is the procedure, after all, and the motive to use it lies in the goal of systematizing something that is typically a complex interpretive task. Whether we do it with humans or hand over the interpretive task to machines, we do so at the cost of much of the nuance that an individual reader might bring to the task.
Thankfully, Les Perelman at MIT has been working hard to demonstrate that teaching students to write for an algorithmic scoring process is a bad idea. And as Perelman recently said in an interview, it is a bad idea to let computers execute algorithms for holistic evaluation for a straightforward reason: it doesn’t work. Students can produce text that satisfies every rule in the algorithm and is still bad writing. With human readers, the problem can be overcome, but only if the readers are free to go beyond what the algorithm specifies.
But there is another reason to be skeptical of algorithmic approaches that may be, in the long run, even more compelling. They shift the focus away from giving students formative feedback. They teach humans to behave more like computers. MIT Media Lab director Joi Ito puts it bluntly:
“The paradox is that at the same time we've developed machines that behave more and more like humans, we've developed educational systems that push children to think like computers and behave like robots.”
But Ito is not anti-technology. And neither am I. We can imagine a different sort of role for technology to play in teaching and learning if we work at it. Rather than trying to create machines that read (or write) like humans, we can instead create systems that give humans a chance to focus more on how we might improve as writers and communicators.
What would these look like?
At the Writing, Information, and Digital Experience (WIDE) research center, we’ve been working on assistive technologies that can improve written communication. We’ve used a number of algorithmic approaches to do this. In our peer learning service Eli Review, for instance, we track activity – feedback that reviewers and instructors give to writers, as well as what writers do with that feedback – to evaluate the “helpfulness” of a review. We do this as a way to help students learn to become better reviewers, a practice that has itself been shown to improve writing performance.
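To make the idea concrete, here is a minimal sketch of how a “helpfulness” signal like this could be computed from review activity. Eli Review’s actual metric is not described here, so the function name, inputs, and weights are all illustrative assumptions, not the real implementation.

```python
# Hypothetical sketch: scoring a reviewer by how much of their
# feedback a writer actually used. The weights and field names are
# assumptions for illustration, not Eli Review's real metric.

def helpfulness_score(comments_given, comments_endorsed, comments_acted_on):
    """Return a 0.0-1.0 helpfulness signal for one reviewer.

    comments_given:    total comments the reviewer left
    comments_endorsed: comments the writer marked as helpful
    comments_acted_on: comments the writer addressed in revision
    """
    if comments_given == 0:
        return 0.0
    # Weight revision uptake more heavily than endorsement, on the
    # assumption that acting on feedback is the stronger signal.
    uptake = comments_acted_on / comments_given
    endorsement = comments_endorsed / comments_given
    return round(0.7 * uptake + 0.3 * endorsement, 2)
```

The point of a signal like this is formative: it gives students something to improve as reviewers, rather than assigning a verdict to the writing itself.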
Another WIDE project uses a machine learning algorithm to help online discussion facilitators visualize and moderate comments in online forums. We worked with science educators at the Museum of Life & Science in Durham, NC to develop an application called The Faciloscope, which gives the museum staff a way to see when contributors to a thread are interacting in ways that are likely to be productive, and when they are not. The facilitator can then choose to intervene, perhaps by asking a question to get the discussion back on track, or perhaps by banning participants engaged in unwanted or unacceptable behavior. The Faciloscope doesn’t automate any of the work, but it does help facilitators who may be attending to many simultaneous threads know when one could use their attention.
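The shape of that task can be sketched in a few lines. The Faciloscope uses a trained machine learning model; the rule-based stand-in below is far simpler and purely illustrative, with made-up keyword lists and thresholds. What it shares with the real tool is the division of labor: the code labels comments and flags a thread, and a human facilitator decides whether and how to intervene.

```python
# Toy stand-in for an ML classifier: label each comment, then flag
# a thread for facilitator attention when unproductive comments pile
# up. All cue words and the threshold are illustrative assumptions.

PRODUCTIVE_CUES = {"because", "evidence", "what if", "how about"}
UNPRODUCTIVE_CUES = {"stupid", "shut up", "idiot"}

def label_comment(text):
    """Label one comment 'productive', 'unproductive', or 'neutral'."""
    lowered = text.lower()
    if any(cue in lowered for cue in UNPRODUCTIVE_CUES):
        return "unproductive"
    if any(cue in lowered for cue in PRODUCTIVE_CUES):
        return "productive"
    return "neutral"

def flag_thread(comments, max_unproductive=2):
    """Return True when a thread likely needs a facilitator's attention."""
    labels = [label_comment(c) for c in comments]
    return labels.count("unproductive") >= max_unproductive
```

Note that `flag_thread` only surfaces a thread; it never posts, deletes, or bans on its own. That design choice is the whole argument in miniature: the algorithm assists, and the human interprets.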
So, how will near-future writing technologies influence teaching and learning in writing? It is up to us. Algorithms can have a positive role to play, Joi Ito eloquently argues, in making our world more human. This will only happen, though, if we play an active role in the design of algorithmic tools and focus their use on assistive applications rather than on replacing human thinking.