Midjourney/Every illustration.

Think First, AI Second

Three principles for keeping your cognitive edge while leveraging AI's capabilities


We’ve all had that moment: being unable to recall where an acquaintance works or a restaurant name, but knowing exactly where the information sits on LinkedIn or Google Maps. AI is similarly reshaping our cognition—only faster. Economist Ines Lee, who spent years teaching Oxford and Cambridge students to think independently, discovered her own dependency when ChatGPT went down one afternoon and she couldn’t articulate the ideas she needed. She argues that the solution isn’t using less AI, but using it differently. Drawing on MIT neuroscience research, Ines shares three practical principles for staying cognitively engaged while leveraging AI’s capabilities. Read on to learn how to think with AI rather than letting it think for you.

—Kate Lee



When ChatGPT went down one afternoon while I was preparing a presentation, I opened my document and my fingers froze. I couldn’t articulate why the frameworks connected to the examples I’d planned to use. My explanations lived in chat history I could no longer access.

As an economics lecturer, I’d spent years teaching students at Oxford and Cambridge to think independently, question assumptions, and apply frameworks to new situations rather than memorize them. I was apparently losing that skill myself—and I wasn’t alone. Colleagues across knowledge work described the same creeping inability to start meaningful projects without first consulting AI.

This past June, MIT researchers published findings that seemed to explain what we’re experiencing. They scanned the brains of 54 students writing essays under three conditions: using only ChatGPT, using only Google, or using just their own thinking.

The results seemed damning. The ChatGPT group showed the lowest neural activity, and 83 percent couldn’t remember what they’d written, compared to just 11 percent in the other groups. “Is ChatGPT making us stupid?” the headlines asked.

But buried in the study was a finding most coverage missed. The researchers also tested what happens when you sequence your AI use differently. Some participants thought first, then used AI (brain → AI). Others used AI first, then switched to thinking (AI → brain).

The brain → AI group showed better attention, planning, and memory even while using AI. Remarkably, their cognitive engagement stayed as high as students who never used AI. The researchers suggest this increased engagement came from integrating AI’s suggestions with the internal framework they’d already built through independent thinking.

Meanwhile, students who started with AI stayed mentally checked out, even after they switched to working on their own. Starting passive meant staying passive.

The study has limitations—a small sample, an artificial task, not yet peer reviewed—but the pattern matched what I’d seen in my classroom and in my own work.

This isn’t the first time we’ve seen technology reshape cognition. A 2011 study found that when people knew they could Google information later, they remembered where to find it but not the information itself. A 2020 study found that frequent users of GPS navigation develop weaker spatial memory and struggle to navigate without turn-by-turn directions. AI follows the same pattern—with higher stakes.

The question isn’t whether to use AI. It’s how to use it without losing the cognitive capabilities that make us valuable: the ability to defend our reasoning, adapt our thinking to new contexts, and understand where our approaches might fail.

The MIT study offers a clue: Sequence matters. What follows are three principles I’ve developed for using AI in ways that challenge assumptions, expose blind spots, and force you to explain your reasoning rather than letting it do all the thinking for you.

But first, we need to understand the fundamental distinction that makes these principles work: the difference between passive consumption and active collaboration.

How to think with AI: Active versus passive use

Think about two ways to learn a piece of music. You can learn it by rote—like a kid memorizing the hand positions for Beethoven’s “Für Elise,” training your fingers through repetition until you can perform the piece flawlessly. Or you can learn the piece by understanding its structure, the chord progressions, the harmonic logic. You still practice until your fingers know the patterns, but you understand why the music works. Now you can transpose it, improvise variations, and explain why certain changes would or wouldn’t work.

This same pattern appears in programming: Developers who plan their approach before asking AI to generate code maintain a better understanding of their systems than those who start with prompts.

But the stakes are higher than individual productivity. Research shows critical thinking abilities are declining, especially among younger workers—precisely as employers increasingly demand these skills. The capabilities becoming scarcer are the ones organizations need most: the ability to defend reasoning, adapt thinking to new contexts, and understand where approaches might fail.

Passive AI use is like learning music by rote. You can produce output—an essay, a strategy document, an analysis—by following what AI generates. But you don’t always understand why the argument works, what assumptions it makes, or where it might fail. If somebody asks you to adapt it to a different context, you might be stuck. If you have to defend the reasoning, you have no answer. The output lives in your chat history, not your understanding.

Here’s an example: “Write me a strategy for improving team communication.”

You get an answer. You might even implement it. But you haven’t wrestled with what “better communication” means for your team, what’s causing the current problems, or why certain solutions might fail in your context.

Active AI use means building understanding while collaborating with the model. You frame the problem yourself, make an initial pass, then use AI to challenge your assumptions, uncover blind spots, and sharpen your arguments. You’re learning the chord progressions, not just memorizing the key presses. The machine assists; you own the reasoning.

This might look like: “Here are our context, goals, and constraints. I’ve listed three hypotheses and current evidence. Challenge my assumptions and ask for the missing data before proposing a plan.”

You’re still getting AI’s help, but you’ve done enough thinking that you can evaluate whether its challenges are valid, its questions reveal real gaps, and its suggestions fit your situation. You understand why the strategy works, so you can adapt it when circumstances change.
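For readers who script their AI workflows, the active pattern above can be sketched as a simple prompt builder that puts your own framing first and only then asks the model to push back. This is a minimal illustration under assumed field names and wording, not a prescribed implementation:

```python
def build_active_prompt(context, goals, constraints, hypotheses, evidence):
    """Assemble a 'brain -> AI' prompt: your own framing first,
    then an explicit request for challenge rather than answers."""
    lines = [
        f"Context: {context}",
        f"Goals: {goals}",
        f"Constraints: {constraints}",
        "My hypotheses:",
        *[f"  {i}. {h}" for i, h in enumerate(hypotheses, 1)],
        f"Current evidence: {evidence}",
        # The key move: ask the model to probe your reasoning,
        # not to produce the plan for you.
        "Challenge my assumptions and ask for any missing data "
        "before proposing a plan.",
    ]
    return "\n".join(lines)

# Hypothetical example values for the team-communication scenario
prompt = build_active_prompt(
    context="Remote team of eight, async-first",
    goals="Improve cross-team communication",
    constraints="No new tools; 30 minutes per week budget",
    hypotheses=["Standups run too long", "Decisions aren't documented"],
    evidence="Internal survey: many say they miss key updates",
)
print(prompt)
```

The point of the helper is the ordering, not the code: the model never sees the request until your context, hypotheses, and evidence are on the page.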

Of course, passive AI use has its place: transcribing text from screenshots, generating routine reports from data, creating multiple versions of the same message for different audiences. These are like scales and technical exercises—mechanical tasks that don’t require deep comprehension.

But for work where you care about judgment, learning, and deep understanding, you need to build your own understanding.

Comments

Cem Vogt 4 days ago

The real insight here is that “Think First, AI Second” isn’t about sequence but about maintaining the cognitive depth to know when AI is actually helping versus just making us feel productive—thanks for the nuanced take.

@h3ath3rly 6 days ago

This is an excellent article with practical and actionable insights. Thanks so much, Ines!!

Lorin Ricker 6 days ago

My prompts to an AI are (hardly ever) just one sentence; sometimes I worry that I'm falling into a TL;DR-prompt-context hole. But this article gives me fresh courage, and additional specificity. In particular, I like the advice and examples to hold off the AI-as-buddy sycophancy... I need the critique and probing feedback, not the bunnies-and-flowers of how "great" my idea is! Thanks for a great article!

Shashaank Bhaskar 5 days ago

One of the best articles I've read recently. Looking forward to more of them, Ines!

@Andrea.lucard about 24 hours ago

Really useful also in explaining why I feel like I understand something then am unable to articulate it. The positioning prompts are excellent. Thank you.