AI and Critical Thinking
What are the chances that one of the smartest things we've ever invented could actually make us dumber? A recent study suggests: pretty high!
First of all, what is critical thinking?
“Critical thinking is the process of analyzing available facts, evidence, observations, and arguments to make sound conclusions or informed choices. It involves recognizing underlying assumptions, providing justifications for ideas and actions, evaluating these justifications through comparisons with varying perspectives, and assessing their rationality and potential consequences. The goal of critical thinking is to form a judgment through the application of rational, skeptical, and unbiased analyses and evaluation.”
Source: Wikipedia
We can say that critical thinking is one's ability to question, analyze, interpret, evaluate, and make judgements about what one reads, hears, says, or writes.
A recent study by Microsoft found that higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is linked to more critical thinking. Let that sink in for a moment. The more you rely on GenAI, the less you trust your own mind to think critically. Conversely, the more you rely on your own mind, the more critically you will think.
GenAI shifts critical thinking efforts in three distinct directions:
Knowledge and comprehension: shifting from information gathering to information verification.
Application: shifting from problem-solving to response integration.
Analysis, synthesis and evaluation: shifting from task execution to task stewardship.
In this scenario, the user becomes a task supervisor and integrator. AI generates bits and pieces, while the user verifies and assembles them.
GenAI tools reduce the effort required for critical thinking while encouraging over-reliance on these tools. As users depend on AI, they no longer see the need to engage in independent problem-solving. The shift from task execution to task oversight means trading active engagement for merely verifying and editing AI outputs.
One could argue that AI can be used to automate exception handling. However, over-reliance on AI seems to erode a worker's domain knowledge over time. A once knowledgeable worker may gradually become just an exception handler, trained only in managing anomalies rather than understanding the entire domain.
The same Microsoft study says:
"A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."
GenAI can improve worker efficiency but may lead to long-term over-reliance on the tool and a diminished ability for independent problem-solving. Increased use of GenAI tools may gradually reduce the need for independent thinking. In the future, job interviews might no longer test problem-solving skills but rather evaluate how well a candidate can prompt AI, verify its output, and integrate it into a workflow. A worker skilled in this process may become more valuable than one demonstrating problem-solving abilities, as AI can handle problem-solving more efficiently, making human intervention unnecessary.
Over-reliance—defined as “users accepting incorrect recommendations”—is closely tied to a lack of critical thinking.
This over-reliance can lead to accepting poor or even incorrect results. If the AI output meets the minimal criteria for acceptance, it is integrated into the work as is. The result is both a lower-quality product and a gradual loss of the cognitive processes that would have produced a higher-quality outcome.
Take writing, for example. To become an experienced writer, one must write consistently. An experienced writer can use GenAI tools to boost productivity, whether for content generation or idea creation. Since their skills are already well-developed, they don’t risk losing much, and any minor loss can be traded for valuable time.
However, a novice writer who over-relies on AI risks stalling their skill development. This happens because AI enables them to bypass critical writing processes, such as constructing logical arguments or deepening their understanding of the subject matter.
Why is critical thinking important?
In the work context, we might find that AI-driven processes are more efficient. We've seen how the human in the loop tends to shift toward an integrator role, assembling AI-generated bits. This delegation leads to the decline of several key skills: analysis, synthesis, problem-solving, memory, and ultimately, critical thinking.
Critical thinking enables individuals to ask questions, analyze concepts, provide evidence-based arguments, understand complex ideas, and seek the truth. In a world where young people (and not just them) spend vast amounts of time on social media, influencing large groups through carefully crafted messages has never been easier. The democratization of internet access allows anyone to spread ideas, true or false, to anyone willing to receive them. Moreover, some issues are so complex that discerning the truth is a challenge in itself.
Today, we need critical thinking to protect ourselves from manipulation. It’s very easy to fall victim as we’re all affected unconsciously by confirmation bias. Messages that align with our existing beliefs resonate so strongly that we often stop questioning them.
The internet is not going to tell you the truth, it's only going to confirm what you already believe.
Let’s try an exercise by asking Google two questions and see what responses we get:
Why is milk bad for me?
Why is milk good for me?
Now, let’s say you already believe that milk is bad for you. The first answer will only reinforce this belief, because of the existing bias in your mind, making it less likely that you’ll research the topic any further.
(Replace "milk" with anything else—the results will be just as interesting)
With AI it’s even worse, because AI is prone to hallucinations. Essentially, it is so eager to provide an answer that it will simply make things up to satisfy you. What’s more concerning is that if you don’t carefully review the response, filtering it through your own judgment, you may not even realize how much of it, or which parts, are hallucinated. That’s why we must exercise caution whenever dealing with AI-generated responses.
GenAI is not an unquestionable source of truth. It’s easy to forget that these AI applications are ultimately human creations, reflecting our inconsistencies, biases, and flaws. As AI is designed to look and feel more human, it fosters an increased sense of trust. Without critical thinking, we risk accepting biased or outright false outputs from AI.
While I’ve used the internet and GenAI tools as examples, this principle applies to any piece of information, regardless of its source. Critical thinking is essential for building an internal model of the world that is not distorted by prejudice but instead reflects empirical reality. While some amount of subjectivity is what makes each of us see the world in a unique way, having a shared common perception of the world is what enables us to function as a society.
Things like "water is wet", "light is bright", "night is dark", "rocks are hard", "up is not down", and "2+2=4" are basic truths that we all share. These are simple, straightforward facts, easily discerned. But for more complex subjects, one needs to apply critical thinking to analyze them and formulate judgements about them. This is what most of us do on a daily basis. That is why delegating critical thinking to AI leads to mental atrophy, making us more susceptible to misinformation and manipulation.
We need critical thinking now more than ever. It is a power that stems from free thought and independence, and it’s up to us to decide how much of it we are willing to sacrifice for the sake of convenience.
Sobering article, and very well said. Critical thinking is being threatened by over-reliance.
I've always wondered -- it seems like the world is still reeling over the effects of social media, and the democratization of information. We mostly really haven't coped. And now AI is here.
I don't think the answer is avoiding AI entirely. I think the answer is learning to think critically even while leveraging AI -- which of course is a much harder thing to do.
This is so true. Critical thinkers are more self-confident. The more we rely on AI, the less we trust our own capacity to think clearly.