There’s growing concern that an over-reliance on AI could lead to apathy and cognitive decline. So, what can we do to stay sharp in an AI-driven world?
Artificial intelligence (AI) is no longer just a futuristic fantasy – it’s here, and it’s thinking for us.
From generating essays and diagnosing diseases to analysing legal documents and coding software, AI is increasingly taking over tasks that once required years of human expertise.
But here’s the twist: as AI gets smarter, are we getting dumber?
There’s growing concern that an over-reliance on AI could lead to a kind of cognitive atrophy – let’s call it AI apathy.
If machines can handle the tough thinking, will humans stop flexing their own mental muscles? Will students, professionals, and everyday knowledge workers slowly lose the ability (or motivation) to problem-solve, analyse and create on their own?
The signs are already here.
Research on how we use GPS navigation suggests that when we rely too heavily on automated directions, our spatial memory declines. Studies of pilots show that those who lean heavily on autopilot lose critical situational-awareness skills.
And psychologists have documented the so-called ‘Google Effect’ – our tendency to forget information because we know we can simply look it up online again.
So, what happens when AI isn’t just giving us directions or retrieving facts but actually doing our thinking?
Let’s unpack the risks, the science behind cognitive decline and what we can do to stay sharp in an AI-driven world.
The Effort Factor: Why thinking hard matters
Here’s a paradox: struggle is good for learning.
Cognitive Load Theory tells us that our brains need a certain level of difficulty to process information deeply. If something is too easy – like, say, getting AI to write an essay for you – your brain doesn’t engage enough to form lasting knowledge.
Psychologists call this the productive struggle: the idea that working through difficult problems builds resilience, confidence and deep understanding. Canadian-American psychologist Albert Bandura, a pioneer in motivation research, called it mastery experience – the boost in confidence that comes from figuring something out on your own.
The problem? AI can short-circuit this process.
When students or professionals rely too much on AI, they miss out on the intellectual workout that strengthens their analytical skills.
Then there’s Self-Determination Theory, a well-established motivation framework, which highlights competence as a key driver of engagement.
People feel motivated when they overcome challenges – not when answers are handed to them on a silver platter. If AI makes tasks too easy, it risks stripping away the satisfaction of learning, reducing motivation in the process.
It’s the difference between climbing a mountain and taking a helicopter to the top. Sure, you get the view either way, but one experience builds strength, resilience and pride – the other is just a free ride.
The GPS Effect: How automation can dull our skills
If you’ve ever used a GPS or satnav and realised later that you have no idea how you got to your destination, you’ve experienced automation-induced cognitive decline in action.
Studies have found that people who frequently use GPS show poorer spatial memory and a weaker ability to navigate without assistance.
In one experiment, people who relied heavily on GPS performed worse on navigation tasks that required them to remember routes. Even more concerning, frequent GPS users showed declines in hippocampal function – the brain region responsible for spatial memory.
It’s a simple case of ‘use it or lose it’. When we outsource a cognitive task to technology, our brains adapt by shifting resources elsewhere – or just going idle.
Convenience comes with a cost. If AI takes over too much of our cognitive workload, we may find ourselves less capable of deep thinking when it really matters.
AI and Knowledge Work: The rise of intellectual sluggishness
Now, let’s bring this back to AI.
If navigation apps weaken our spatial awareness and autopilot dulls a pilot’s situational awareness, what happens when AI starts handling intellectual tasks – writing reports, solving maths problems, synthesising research?
The risk is intellectual complacency.
If AI is always there to generate answers, will students and professionals stop pushing themselves to think critically? Will they trust AI blindly, without questioning its logic?
Early evidence suggests this is already happening. Education researchers report that tertiary students who used generative AI tools for their essays ultimately performed worse in their examinations, raising questions about the role of AI in education.
In some creative fields, studies show a mixed picture: generative AI can boost individual creativity while reducing the diversity of new ideas. In one study, readers found AI-assisted stories more enjoyable, but those stories became increasingly similar to one another over time.
Even beyond individual skills, there’s a bigger issue: what happens to innovation if AI users become passive consumers of machine-generated knowledge instead of active thinkers?
Great scientific discoveries, philosophical insights and artistic breakthroughs don’t come from taking shortcuts – they come from wrestling with tough ideas.
If we let AI do all the heavy lifting, we risk stagnation in human creativity and critical thought.
How to avoid AI apathy
So, should we just abandon AI and go back to pen and paper? That’s not happening.
AI is an incredibly powerful tool, and the key is learning how to use it without losing ourselves in it. Here’s how in five easy steps:
1. Use AI as a thinking partner, not a crutch
Instead of letting AI think for you, use it to enhance your thinking. For example, students can use AI to brainstorm ideas but still write their own essays. Professionals can use AI for research but critically evaluate those findings rather than blindly accepting them.
2. Prioritise process over easy answers
Schools and workplaces can emphasise the process of reaching a conclusion, not just the final answer. Requiring explanations, alternative solutions and independent reasoning helps maintain cognitive engagement.
3. Practise “unplugged” thinking
Just as pilots need manual flying refreshers, knowledge workers can benefit from AI-free exercises. Writing an essay without AI, doing mental maths or brainstorming without digital help keeps the brain active and adaptable.
4. Think critically about AI
AI isn’t perfect – it can be biased, make mistakes or produce misleading results. Teaching AI literacy ensures we remain active thinkers, questioning and verifying AI outputs rather than passively accepting them.
5. Use AI as a Socratic tutor
AI can be used to guide learning rather than replace effort. For example, AI tutors can give hints instead of direct answers, helping students reach solutions independently.
The goal isn’t to reject AI – it’s to create a balanced relationship where AI enhances human intelligence rather than replacing it.
The future of thinking in an AI world
AI is here to stay, and its ability to outperform humans in certain cognitive tasks is only growing. But that doesn’t mean we should surrender our intellectual abilities to it.
The risk of AI apathy is real: if we rely too much on AI, our own analytical and creative skills could wither from disuse. But if we consciously design education, work and daily life to keep human thinking in the loop, we can maintain our cognitive edge.
In a world where AI is getting smarter, our challenge is to make sure we don’t get dumber.
Because, at the end of the day, AI might be able to think for us – but it’s up to us to make sure we keep thinking for ourselves.
Associate Professor, Global Health, Nossal Institute for Global Health, University of Melbourne; Advisory Board Member, Australian Alliance for Artificial Intelligence in Healthcare; General Practitioner, Better Health Network; Fellow, Climate Council.
This article was first published on Pursuit. Read the original article.