Who Owns Language? ChatGPT: Biases, Blind Spots and Silicon Valley

Curator's Note

Obviously, no one owns language; we are all simply its editors. The wild fear of ChatGPT, then, seems to stem mainly from instructors' inability to confidently grade essays about Classic Hollywood or some other well-worn topic. That fear says more about outdated essay prompts, assigned in large lecture classes to students who feel little connection to them, than about the technology itself, and the widespread adoption of Turnitin.com bears this out.

However, ChatGPT, like all AI, has both biases[1] and blind spots. Recent studies find, for instance, that GPT detectors are biased against nonnative speakers of English; a simple refinement of the prompt, asking the app to rewrite the text as a native speaker would, largely remedies this. It is this aspect of the tool, the reframing of a text, that shows real promise. One of my students, for example, is neurodivergent, and she frequently uses ChatGPT to rephrase her emails, to make 'sense' of her disparate ideas, and to help with coding.
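
For anyone curious to try that kind of reframing outside the chat window, the snippet below is a minimal sketch, assuming the OpenAI Python client and an API key stored in the OPENAI_API_KEY environment variable; the model name and the instruction wording are placeholders of my own, not a recommendation.

```python
# Minimal sketch of the 'reframe this text' use described above.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# environment variable; model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Here is my email draft that I would like rephrased more clearly."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model would do
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's text as a fluent native English "
                       "speaker would, keeping the meaning and tone intact.",
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```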

The video that accompanies my post is a basic tutorial for creating an account with OpenAI, the company that developed this AI-infused chatbot. Two things bear mention: first, the service is currently free but will not remain so, although the price is 'an open question'. Second, the video shows the less well-known interface, the 'Playground', where one can customize parameters and choose among the available models, text-davinci-003 being the newest and currently the default. Calling attention to the models and their training data is hugely important, if only to remind us that this tool does not capture all online content and sometimes cannot find information even when given a great deal of context, even when a simple Google search renders results.
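
For context, the Playground exposes essentially the same knobs as OpenAI's API did at the time. The sketch below is a period illustration against the legacy completions endpoint that text-davinci-003 used, assuming the pre-1.0 openai library; both that model and that endpoint have since been retired, so treat the prompt and parameter values as illustrative rather than current practice.

```python
# Period sketch of a Playground-style call to text-davinci-003, using the
# pre-1.0 openai library and its legacy Completion endpoint. Parameter values
# mirror the Playground defaults of the time and are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

completion = openai.Completion.create(
    model="text-davinci-003",   # the default Playground model at the time
    prompt="Briefly explain what a language model's training data is.",
    temperature=0.7,            # higher values give more varied output
    max_tokens=256,             # cap on the length of the reply
)

print(completion.choices[0].text.strip())
```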

Recently, I was demonstrating ChatGPT and it could not find a colleague, a filmmaker with an online presence, despite numerous prompt shifts and the inclusion of keywords such as USC and the School of Cinematic Arts, where all faculty have bios. In other such exercises, it is simply wrong. When I query my own bio, for instance, the results seem spookily accurate in naming research interests, but there are always gaping factual errors: it credits me with authoring books I have never heard of and grants me degrees from institutions I never attended.

These new tools are powerfully convincing in their 'intelligence', and I worry that if those of us in the arts and humanities fail to engage both critically and realistically with ChatGPT and its ilk, especially while it remains free and accessible, then the tech companies may eventually come to own language.


[1] See Weixin Liang et al., 'GPT Detectors Are Biased against Non-Native English Writers', arXiv:2304.02819, https://doi.org/10.48550/arXiv.2304.02819.
