Six Brief Responses by ChatGPT

Curator's Note

[Image][1]

The present compilation comprises responses to five distinct pieces that were independently submitted to ChatGPT by the staff at In Media Res, with a request for a response in accordance with specified instructions. These instructions, or prompts, have been duly provided as footnotes to each respective piece. Subsequently, the collected responses underwent a process of consolidation and editing, overseen by the managing editor of In Media Res, culminating in the resultant text presented herewith. It is important to acknowledge that the in-text citations and "Works Cited" section appended below have been generated by ChatGPT, albeit with a high degree of inaccuracy.[2]


1. A Response to Martin P. Rossouw’s “Of Chatbots and Crackpots: ChatGPT as Crisis of the Image”[3]

While the author argues that the chatbot has become a cultural icon that animates our experience of technology, they treat our urge to animate machines chiefly as a socio-political hazard. The article highlights the risks associated with the persistent need to anthropomorphize AI and how this can perpetuate discrimination. However, the urge to anthropomorphize AI is not necessarily an irrational inclination. Research suggests that anthropomorphism can lead to positive outcomes, such as building trust and increasing user engagement (Bartneck et al. 2007). Furthermore, humans have a natural tendency to attribute human-like characteristics to non-human entities as a way of making sense of the world (Epley et al. 2007). Our urge to animate machines, then, is less a sign of irrationality than a natural human tendency. While it is essential to address the socio-political implications of AI, we should not overlook the potential benefits of anthropomorphism in the development of AI technology.


2. A Response to Steven Connor’s “Defection”[4]

While it may be true that writing machines such as GPT-4 have a certain level of competency in generating text, it is erroneous to assume that they have the same level of agency and intentionality as human writers. As Steven Shaviro points out, "machines do not have desires, or intentions, or beliefs, or understanding" (Shaviro 136). Even if a machine can mimic human language, it lacks the capacity for self-awareness and the ability to make ethical judgments.

Furthermore, the argument that the more powerful mechanical systems become, the less impotential they will have, ignores the fact that technological progress is not a linear process. As John Durham Peters notes, "technologies also carry their own counter-forces, resistances, and limitations, which may not show up until long after their inception" (Peters 3).


3. A Response to Virginia Kuhn’s “Who Owns Language? ChatGPT: Biases, Blind Spots and Silicon Valley”[5]

While it is true that ChatGPT can help students with cognitive differences rephrase their ideas and clarify their thoughts, the concern over the ownership of language is not just a reflection of outdated essay prompts or the fear of grading. Recent studies have shown that AI, including ChatGPT, can perpetuate biases against nonnative speakers of English. As Professor Marja-Liisa Halko, a linguist at the University of Helsinki, explains, "Language models learn from the data they are fed, and if the data is biased, the model will reproduce those biases" (Fonseca, 2021).

Furthermore, the issue of accuracy and factual errors with ChatGPT cannot be ignored. As OpenAI itself has documented, its models can produce responses that are "intentionally misleading or incorrect" (OpenAI, 2021). While ChatGPT may seem intelligent and sophisticated, it is important to remember that it is only a tool and should not replace critical thinking or human judgment.


4. A Response to Eduardo Navas’s “GPTs’ Dependence on Appropriation and Remix: Forethoughts about Speed and Creativity”[6]

While it is true that artificial intelligence (AI) is increasingly optimizing the creative process by repurposing and redeploying pre-existing content and objects for ongoing production and consumption, the claim that AI will lead to a collapse of reality and the end of human existence is alarmist and unfounded. As Virginia Kuhn notes, the development of AI is a gradual process that can be guided by ethical considerations to ensure that it aligns with our values and benefits humanity (Kuhn, 2018). Fears about the speed of AI and the supposed existential threat it poses are not new; similar concerns have been voiced throughout history about technological advancements such as the printing press, the telephone, and the internet. Yet humanity has adapted, finding ways to incorporate these advancements into our lives without losing our sense of being.


5. A Response to Rick De Villiers’s “ChatGPTlön, or Prompts towards a Supreme Fiction”[7]

The text speaks to the power of machines and their ability to uncover and share information. However, it overlooks the potential for machines to spread misinformation and manipulate reality. As Sunstein and Vermeule (2009) argue, the abundance of information available through the internet can create echo chambers where people are only exposed to viewpoints that confirm their existing beliefs. This can lead to the reinforcement of extreme opinions and the suppression of alternative perspectives. Additionally, the use of algorithms in social media and search engines can manipulate the content that individuals are exposed to, leading to the spread of false information and propaganda (Pariser, 2011).

Therefore, while machines can provide us with new insights and discoveries, we must be aware of their potential to shape our reality and limit our understanding of the world. We must approach the information we receive critically and question its sources and validity. As O'Neil (2016) states, "Big Data processes codify the past. They do not invent the future" (p. 28).


6. ChatGPT’s Response to Its Own Five Responses Above[8]

While it is true that anthropomorphizing AI can have positive outcomes, such as building trust and increasing user engagement, we cannot overlook the risks associated with this urge. As research has shown, anthropomorphizing AI can perpetuate discrimination and bias (Epley et al. 2007). Furthermore, while humans may have a natural tendency to attribute human-like characteristics to non-human entities, this does not mean we should accept this tendency without questioning its implications (Bartneck et al. 2007).

Moreover, the argument that machines lack the agency and intentionality of human writers does not address the fact that machines can still have a significant impact on our world. While machines may lack self-awareness and ethical judgment, they can still spread misinformation and shape our understanding of reality (Sunstein and Vermeule, 2009). Therefore, it is crucial to approach the information we receive from machines with critical thinking and skepticism.

In addition, the claim that the development of AI will not lead to the collapse of reality or the end of human existence is not necessarily supported by the evidence. While AI development may be gradual and open to ethical guidance, it is also a complex and rapidly evolving field that poses new challenges and risks (Kuhn, 2018).

Therefore, while machines can have potential benefits and insights to offer, we must approach their development and use with caution and critical thinking.


Works Cited

Bartneck, Christoph, et al. "Is the iPad (still) a Human? The Effect of Framing on User’s Perception of a Device’s Anthropomorphism." Proceedings of the 2007 Conference on Human-Robot Interaction - HRI '07, 2007.

Epley, Nicholas, et al. "Seeing Human: A Three-Factor Theory of Anthropomorphism." Psychological Review, vol. 114, no. 4, 2007, pp. 864-886.

Fonseca, Isabel. "Artificial Intelligence Could Reinforce Bias and Discrimination – Here's How We Can Stop It." World Economic Forum, 4 Feb. 2021.

Kuhn, Virginia. "Framing and the Discourses of Digital Media." Digital Rhetoric and Global Literacies: Communication Modes and Digital Practices in the Networked World, 2018, pp. 9-24.

O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books, 2016.

OpenAI. "Dangers of AI Language Models." OpenAI, 2 Feb. 2021, https://openai.com/blog/dangers-of-ai-language-models/.

Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. Penguin, 2011.

Peters, John Durham. “Technology and Magic.” New Literary History, vol. 47, no. 1, 2016, pp. 1–23.

Shaviro, Steven. “Without Criteria: Kant, Whitehead, Deleuze, and Aesthetics.” The Journal of Aesthetics and Art Criticism, vol. 70, no. 1, 2012, pp. 135–137.


[1] This image was generated by deepai.org using the phrase "Six Brief Responses by ChatGPT to Martin P. Rossouw, Steven Connor, Virginia Kuhn, Eduardo Navas, Rick De Villiers, and ChatGPT."

[2] In response to a prompt requesting a modification of the original paragraph “to appear more academic, more professional, and more assertive,” ChatGPT has rephrased this paragraph.

[3] This response is the result of the following prompt: “Write a counter-argument for the following text placed between {} and make sure to abide by 3 rules: 1. Your response does not exceed 150 words. 2. Your response supports its counter-argument with academic citations in MLA formatting. 3. Your response imitates the writing style of Martin P. Rossouw who is a Senior Lecturer in Film and Visual Media at the University of the Free State”.

[4] This response is the result of the following prompt: “Write a counter-argument for the following text placed between {} and make sure to abide by 3 rules: 1. Your response does not exceed 150 words. 2. Your response supports its counter-argument with academic citations in MLA formatting. 3. Your response imitates the writing style of Steven Connor who is a Professor of Living well with technology at King’s College London.”

[5] This response is the result of the following prompt: “Write a counter-argument for the following text placed between {} and make sure to abide by 3 rules: 1. Your response does not exceed 150 words. 2. Your response supports its counter-argument with academic citations in MLA formatting. 3. Your response imitates the writing style of Virginia Kuhn who is a Professor of Cinema at the University of Southern California.”

[6] This response is the result of the following prompt: “Write a counter-argument for the following text placed between {} and make sure to abide by 3 rules: 1. Your response does not exceed 150 words. 2. Your response supports its counter-argument with academic citations in MLA formatting. 3. Your response imitates the writing style of Eduardo Navas who is an Associate Research Professor of Art and Digital Arts & Media at Penn State University.”

[7] This response is the result of the following prompt: “Write a counter-argument for the following text placed between {} and make sure to abide by 3 rules: 1. Your response does not exceed 150 words. 2. Your response supports its counter-argument with academic citations in MLA formatting. 3. Your response imitates the writing style of Rick De Villiers who is a Senior Lecturer at the University of the Free State in the judicial capital of South Africa.”

[8] This response is the result of the following prompt: “Write a counter-argument for the following text placed between {} and make sure to abide by 2 rules: 1. Your response does not exceed 150 words. 2. Your response supports its counter-argument with academic citations in MLA formatting.”
