Over the past few decades, online discussions have occasionally turned to the question of whether the tools that allow us easy access to a wealth of information (like Google) help or hurt our ability to reason (i.e., to be critical thinkers) and to learn. We are now seeing this same discussion about new artificial intelligence tools, like ChatGPT. Before reading on, take a moment and think about your own perspective on this: Do these tools help make you a stronger critical thinker or not?

Figuring this out requires a clear definition of critical thinking, something that is frequently lacking in these discussions. Let's use what Daniel Willingham, a cognitive scientist, calls his "commonsensical view." His definition is that critical thinking is: 

  • Novel: Not a direct repetition of something you’ve learned before. 
  • Self-directed: Not just repeating steps you’ve been given. 
  • Effective: Following patterns that are likely to yield useful conclusions.

These elements are best thought about in the context of the tools we have available: as we use generative AI to create more writing and other content, it becomes even more important to approach media with the right framework. Daniel Dennett, the philosopher and scientist, wrote about some mental habits to use (he called them "intuition pumps"). Occam's razor is one many people are familiar with: don't rely on a complex explanation when a simpler one works just as well. He also describes things to be on the lookout for, such as a "deepity": a statement that sounds simple and profound but is ambiguous, true only on a trivial reading and essentially empty on the more interesting one (Dennett gives the example "Love is just a word"). If you find yourself gasping and going "wow" after encountering one of these, take a moment to see if you can explain what interesting idea it actually revealed; you may not be able to!

Like any habit of mind, these effective general patterns can be adopted with effortful practice. And domain-specific critical thinking skills can be supported through instruction. That is, if your goal is to teach someone how to, say, debug a program that isn't working, there is evidence that this skill can be taught through direct instruction and practice applying it, with feedback along the way (a sketch of what such an exercise might look like follows). 
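
As a purely hypothetical sketch of such a debugging exercise (the function, its bug, and the steps are invented for illustration, not taken from any study mentioned here), here is a small Python example of the pattern a learner might practice: reproduce the failure, form a hypothesis about the cause, and test a fix.

```python
# A small "program that isn't working": it should average only the values
# above a threshold, but it returns the wrong answer.

def average_above_threshold(values, threshold):
    """Return the mean of the values strictly greater than threshold."""
    total = 0
    count = 0
    for v in values:
        if v > threshold:
            total += v
        count += 1  # Bug: counts every value, not just the ones that were kept
    return total / count

# Step 1: reproduce the failure with a minimal case.
print(average_above_threshold([1, 10, 20], 5))  # expected 15.0, actually prints 10.0

# Step 2: hypothesis: the denominator includes values that were filtered out.
# Step 3: test the hypothesis by changing only that line and re-running.

def average_above_threshold_fixed(values, threshold):
    """Same as above, but counts only the values that pass the filter."""
    total = 0
    count = 0
    for v in values:
        if v > threshold:
            total += v
            count += 1  # Fixed: increment only inside the filter
    return total / count

print(average_above_threshold_fixed([1, 10, 20], 5))  # 15.0, as expected
```

The point is not this particular bug but the repeatable pattern around it; with direct instruction, practice, and feedback, that pattern is exactly the kind of domain-specific skill that can be taught.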

But this gets at another challenge. Frequently, what people mean by critical thinking is the ability to apply familiar approaches to new situations. You might think that someone who just received that training in debugging could use the same underlying skills to help, say, revise an essay. However, a large body of psychology research finds that we are generally terrible at transferring critical thinking skills to new situations. We'll dive into this limitation on transferring knowledge in another post, but, for now, we shouldn't assume that expertise in one field will protect people using generative AI when they evaluate false or misleading claims outside that field.

So what does all of this mean for a person trying to keep up their critical thinking skills as the use of generative AI ramps up? Here are a few suggestions:

  1. Learn: Continue building your own body of knowledge and skills, even if it seems like something a computer could do for you. That grounding will let you make connections and form new ideas that go beyond what even ChatGPT can generate. 
  2. Evaluate: Stress the critical in "critical thinking." It is well known that generative AI can hallucinate, particularly when it comes to up-to-date research. Even outside interactions with a tool like ChatGPT, apply some healthy skepticism, whether to a news article, a YouTube video, an interesting newsletter, a corporate strategy document, or any other media. Look for additional sources for claims you see, particularly ones that seem too good to be true. 
  3. Reflect: After you work with an AI, do some reflecting. For example, if you are using ChatGPT to help you craft a persuasive message (like a marketing email or even a LinkedIn post), ask yourself how it went. Did it produce what you wanted? Which elements aligned with your thinking, and which didn't? Stopping to explore these questions in your own interactions will make you a stronger critical thinker when dealing with the output of AI systems.