Last week, over coffee with a colleague, I found myself in yet another spirited debate about AI’s impact on academic thinking. “AI is making students lazy,” he declared. “They don’t even try to think critically anymore.” My response was, “Or maybe they’re just thinking differently—perhaps even smarter?”
This conversation prompted me to do some digging. In a recent article, Larson et al. (2024) argue that AI can both enhance and erode critical thinking in research, depending on how it’s used.
That’s the short answer, but it’s a bit more complicated than that. So, if you have five more minutes to spare, stay with me on this.
How AI is Changing the Way We Think
In my last blog post, I argued that AI is not only changing research practices but reshaping the very core of academia: original thought. AI is not just speeding up research; it is changing how we think. Whether we realize it or not, AI tools influence the way we engage with information, process arguments, and synthesize knowledge. But are they helping us become better thinkers, or are they making us intellectually complacent?
Larson et al. (2024) identify two key dimensions of critical thinking:
- Individual Critical Thinking – how we personally analyze and evaluate information
- Social Critical Thinking – how we challenge dominant narratives and question ingrained social norms
AI is actively reshaping both dimensions.
Individual Critical Thinking
AI tools like Elicit and Consensus can scan thousands of papers in seconds, summarizing key themes and arguments. On the surface, this is a boon for researchers—less time spent searching means more time for deeper analysis. But there’s a catch: when AI pre-processes information for us, we risk losing the habit of “wrestling” with ideas ourselves.
Think about how we approached research before AI. We’d spend hours reading, grappling with dense theoretical arguments, and questioning each study’s methodology. This mental effort strengthened our analytical skills. Now, if we passively accept AI-generated summaries, we may outsource our thinking to the machine.
But the flip side is also true. Critical thinking isn't just about analyzing information; it's also about challenging our own preconceptions when confronted with new evidence. In other words, our ability to think critically is bounded by the information available to us: less information means fewer opportunities for critique, while broader access provides richer ground for deeper inquiry. By making information more accessible, AI gives us a real opportunity to engage in more rigorous critical thinking.
Social Critical Thinking
Social critical thinking is the ability to question dominant narratives, challenge ingrained social norms, and push beyond what is widely accepted as truth. It’s what allows us to recognize and confront systemic issues like racism, gender discrimination, and other forms of inequality by re-evaluating knowledge through a more critical and inclusive lens.
But what happens when the source of knowledge itself stops evolving and is instead continuously replicated by automated systems? We end up reproducing the same knowledge over and over again. In this way, AI can stifle social critical thinking by reinforcing existing biases and making it harder to introduce new evidence or challenge dominant ideologies.
Addressing this challenge requires two key steps:
- We need to rethink how AI is used in research and education
- We must reconsider how AI models are trained, ensuring that they are designed to incorporate diverse perspectives rather than simply reinforcing the status quo
On the other hand, AI has the potential to enhance social critical thinking by making information about different social systems easily accessible to researchers worldwide, which can contribute to shifts in social norms. For example, in recent years a growing number of countries have recognized LGBTQ+ rights, a shift aided in part by exposure to global perspectives and comparative legal frameworks. The same pattern applies to other major social issues, such as gender equality, climate justice, and workers' rights, where access to diverse viewpoints can help reshape outdated norms.
The Risks
While AI offers powerful tools for enhancing critical thinking, my recent research and conversations with colleagues have revealed three significant risks that deserve careful attention. These aren’t just theoretical concerns—they’re challenges I’ve witnessed firsthand in academic settings, affecting both how we think and how we teach.
The AI Confidence Trap
In my graduate seminar, I noticed a telling pattern. When students cited research papers directly, they approached the findings with healthy skepticism. But when they used AI to summarize those same papers, that critical lens often disappeared. Because AI mimics human reasoning so convincingly, its answers sound authoritative—even if they’re incomplete or incorrect.
Over the past semester, I’ve documented multiple instances where even experienced academics accepted AI-generated content without the usual scrutiny. This isn’t just about AI hallucinations; it’s about how AI’s “authoritative tone” can lull us into overconfidence, bypassing our critical instincts.
The Bias Amplification Effect
In my last post, I described an experiment I conducted last month. I tested a popular AI tool on a dataset about how people weigh different sources of knowledge, specifically comparing Western scientific knowledge with Indigenous or experiential knowledge. Unsurprisingly, the tool consistently devalued Indigenous and experiential knowledge, treating it as lacking "rigour" or "scientific" backing and mirroring the biases in its training data.
This example isn’t unique. AI tools, by mirroring historical biases present in their training data, run the risk of perpetuating and even amplifying these biases on an unprecedented scale, leaving us with an expanded echo chamber of dominant perspectives while alternative viewpoints become marginalized.
The Mastery Mirage
A third major risk is what I call the illusion of mastery. We’ve all been there—reading an AI-generated summary and thinking, “I’ve got this topic covered.” But do we? AI-generated insights can create a false sense of expertise, making it tempting to skip deep engagement with research. This illusion of mastery is particularly dangerous in academia, where true expertise comes from wrestling with complex ideas, not just skimming polished summaries.
Using AI to Enhance—Not Replace—Critical Thinking
AI isn’t inherently bad for critical thinking; it all comes down to how we use it. Here are four ways researchers and educators can ensure AI serves as a catalyst for deeper engagement:
1. Require Justification for AI-Generated Insights
Instead of simply accepting AI summaries, researchers should cross-check sources, verify claims, and critically assess AI outputs. Tools like Scite can help by showing how different studies support or contradict each other, fostering a more analytical approach.
A good practice is to treat AI-generated summaries the same way we treat secondary sources: as a starting point, not the final word. Before integrating AI-generated insights into research, we should be asking—how was this information generated? Is it complete? What perspectives might be missing?
2. Encourage Debate with AI Outputs
AI should be treated like a peer reviewer, not an authority. I’ve started asking my students to argue against AI-generated responses, forcing them to identify gaps and contradictions. This transforms AI from a passive knowledge provider into an active thinking partner.
For example, if AI suggests a conclusion based on a dataset, challenge it by seeking counterexamples, running alternative analyses, or testing assumptions. Tools such as Jenni AI or LEX AI offer built-in prompts for generating counterarguments. Beyond these, framing questions from counterfactual perspectives can be valuable: rather than simply seeking verification, ask from an opposing viewpoint, for instance by requesting the AI to critique a perspective or explain why it finds it unconvincing. This often prompts the AI to provide evidence and reasoning that challenges extreme positions, strengthening analytical reasoning and helping students develop a more nuanced understanding of their fields.
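To make this concrete, here is a minimal sketch of what such a counterargument prompt might look like, assuming access to an OpenAI-style chat API; the claim, model name, and prompt wording are illustrative rather than prescriptive.

```python
# Minimal sketch: ask a model to argue *against* a claim instead of confirming it.
# Assumes the openai package is installed and OPENAI_API_KEY is set; the claim,
# model name, and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

claim = (
    "Expanding urban green space is the single most effective policy "
    "for protecting city biodiversity."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a sceptical peer reviewer. Do not endorse the claim. "
                "Identify counterexamples, methodological weaknesses, and missing "
                "perspectives, and note what evidence would test each objection."
            ),
        },
        {"role": "user", "content": f"Critique this claim: {claim}"},
    ],
)

print(response.choices[0].message.content)
```

The value is not in the critique itself but in what comes next: students have to decide which objections actually hold up, and that is precisely the analytical work we want them doing.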
3. Use AI to Surface Contradictions
Rather than using AI for simple fact retrieval, researchers can leverage tools like Elicit and Petal AI to analyze multiple perspectives on a topic. This forces scholars to engage with conflicting viewpoints rather than accept a single AI-generated response.
Imagine researching the impact of urban development on biodiversity. AI tools might pull together studies supporting both conservation and pro-development perspectives. Instead of settling for one dominant argument, scholars can use these contradictions to deepen their analysis, asking—what factors drive these differences? How does methodology influence conclusions?
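As a rough sketch of this workflow, again assuming an OpenAI-style chat API, one might feed the model a handful of abstracts (hypothetical placeholders below, standing in for real search results) and ask it to map the disagreement rather than deliver a verdict:

```python
# Rough sketch: surface contradictions across sources instead of asking for one answer.
# The abstracts are hypothetical placeholders; in practice they would come from a
# literature-search tool. Assumes the openai package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

abstracts = [
    "Study A: compact development reduced habitat fragmentation across 12 European cities.",
    "Study B: high-density construction was linked to a 30% decline in native bird species.",
    "Study C: outcomes depended on how much connected green space was preserved.",
]

prompt = (
    "For the abstracts below: (1) group findings that favour urban development, "
    "(2) group findings that challenge it, and (3) suggest which methodological "
    "choices (spatial scale, metrics, time frame) might explain the disagreement.\n\n"
    + "\n".join(abstracts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```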
4. Teach AI Literacy in Academia
I’ve started including a section on AI literacy in my courses—helping students learn how AI works, where it can go wrong, and how to critically evaluate its outputs. This small shift has led to far richer discussions about knowledge production and bias in academia.
Universities must do the same. Instead of banning AI, institutions should focus on educating students on AI ethics, biases, and limitations. The goal should be to equip scholars with the skills to interrogate AI-generated knowledge, rather than passively consume it.
Final Thoughts
The question isn’t whether AI will transform academic thinking—it already has. The real challenge is ensuring that AI elevates, rather than diminishes, our capacity for critical inquiry. By recognizing the potential pitfalls (the AI confidence trap, bias amplification, and the mastery mirage) and actively working to mitigate them, we can harness AI as a powerful ally in the pursuit of deeper, more inclusive scholarship.
Thank you for reading! If you have any thoughts, experiences, or research related to AI’s impact on critical thinking, feel free to share in the comments below. Let’s keep the conversation going.