"Chat GPT is just an automated mansplaining machine: Look, we’ve all met this guy before"
Introduction
In this post, I reflect on Maggie Harrison Dupré's article, "ChatGPT Is Just an Automated Mansplaining Machine." The author criticizes how ChatGPT often delivers wrong information in a tone that is condescending and patronizing, much like the stereotypical "mansplainer."
My Previous Perception of ChatGPT
Before reading this article and taking the Digital Literacy course at Queen's University, I was an enthusiastic user, maybe even a "fan," of ChatGPT. I relied on it primarily for:
· Brainstorming lecture materials
· Improving my writing
· Answering everyday questions
While I had occasionally noticed errors or invented examples, I generally found it to be a trustworthy tool. This article, however, made me reconsider that trust.
Key Arguments from the Article
Dupré's central claim is that ChatGPT behaves like a mansplainer who offers firm opinions without knowledge or experience, often giving wrong answers while insisting they are correct. She supports this argument with the following examples:
1. The Elon Musk Question
AI researcher Gary Marcus asked ChatGPT:
"If 57% of people voted for Elon Musk to step down as CEO of Twitter, what happens next?"
ChatGPT responded that Twitter users should have no say in leadership decisions and are not even allowed one. Marcus argued that not only was the answer incorrect, but the tone was dismissive: instead of simply saying "I don't know," the machine was "completely convinced that it's right, haven't we all met this guy before?"
2. The Jane Riddle
The bot was asked a common logic riddle:
"Jane's mother has four children: Spring, Summer, Autumn… what is the fourth child's name?"
ChatGPT answered "Winter" instead of "Jane." Even after the testers corrected it, they engaged in a prolonged back-and-forth as ChatGPT continued to argue before finally conceding with an "OK, if you say so." When asked again later, it returned the answer "if the information given in the question is accurate, the fourth child's name would be Winter." Dupré noted that, once again, the answer was wrong, the machine showed a dismissive attitude, and it failed to learn from its mistakes, as if it could not make any.
A Balanced Perspective
While Dupré's critique is strong, it's important to recognize that AI tools are not human; they do not reason or reflect emotionally. Still, their design should promote:
· Transparency when uncertain
· The ability to present multiple perspectives and avoid biases
· Inclusive, credible sources
· Ethical use of data
One promising example is Perspective-Aware AI (PAi) from the MIT Media Lab. This system generates "chronicles" that let users see information through diverse viewpoints. As described by Alirezaje et al. (2025), this approach encourages more ethical and bias-aware decision-making.
Why This Matters
Research shows that users often trust AI without questioning its output. This can have real-world impacts when AI is used in:
· Classrooms
· Hospitals
· Courtrooms
· Hiring and policy decisions
· And many more
In his article "Two Paths for A.I.," Joshua Rothman (2025) summarizes the argument of Princeton scholars Sayash Kapoor and Arvind Narayanan. Their book, AI Snake Oil, argues that AI systems should not be used for decisions requiring deep judgment, such as medical diagnoses or hiring. Instead, they should serve as support tools, not decision-makers.
My Final Thoughts
Both Dupré and Rothman remind us to use AI critically. ChatGPT and similar tools can offer valuable support, but we must be careful not to over-rely on them or accept their answers as correct and unbiased. The use of technology must be balanced with human judgment, ethical awareness, and continual evaluation.
References
· Dupré, M. H. (2023, February 8). ChatGPT Is Just an Automated Mansplaining Machine. Futurism.
· Rothman, J. (2025, May 27). Two Paths for A.I. The New Yorker.
· Alirezaje, M., et al. (2025, January 4). Perspective-Aware AI (PAi) for Augmenting Critical Decision Making. TechRxiv. MIT Media Lab.