Artificial intelligence isn’t replacing me – it’s extending me
The extended mind theory, and how it applies to our use of AI
On top of everything going on, I’ve developed cyclical chin breakouts. (Bear with me.) I’m dimly aware that hormonal birth control can help with this. So, I explained the situation to ChatGPT and asked whether birth control would help. The answer: yes, but be careful if you’re over 35. And so we discussed tradeoffs: HBC can lower certain cancer risks (endometrial, ovarian, maybe colorectal) but increase breast cancer risk. Given my family history, and the fact that breast cancer is already far more lethal than the cancers whose risk is reduced, I decided treating my acne with HBC wasn’t worth it. I made an informed decision, and I feel good about it.
This is a common occurrence for me these days: turning to ChatGPT for help figuring something out.
The mind beyond the brain
Our notion of human intelligence is too “brain-bound.” That’s the argument of The Extended Mind, a 2021 book by Annie Murphy Paul. Drawing on a wide array of evidence, Paul shows that our minds achieve deeper insights when they interact with our bodies, our spaces, and our fellow humans.
For instance, Paul highlights studies of stock traders who perform better when they attend to their heartbeats, which signal market opportunities more reliably than the traders’ conscious thought does.
Another study found that radiologists who remained seated while reviewing scans spotted on average 85% of irregularities, while radiologists who were walking identified 99%.
She gives example after example of how the mind’s powers are augmented by interactions outside the brain.
Where AI fits in
Early in the book, Paul shares an anecdote.
It was 1997, and researcher Andy Clark left his laptop behind on a train. Being separated from his laptop felt like being severed from part of his mind, and it propelled him to pursue the question of where our mind ends and the rest of the world begins. He ended up developing the extended mind theory. It’s really just a concept, a framework, but it helps us see how our thinking improves when our minds receive feedback from external components.
The book just missed the artificial intelligence craze. In 2025, four years after its publication, the book seems incomplete without a discussion of AI. It’s an unfortunate misalignment of timing, because the extended mind theory is an apt model for how we interact with AI when we use it on a personal level.
AI & me
I use AI all the time; I was an early adopter. Yes, I experience angst when I think about how fast the world is changing, and how inscrutable the future looks when I try to picture my kids graduating and starting jobs. (What careers will be left?)
But the practical and knowledge-seeking sides of me recognize the clear benefits.
ChatGPT and I discuss many things: what to cook for dinner using ingredients on hand; difficult parenting situations that don’t fit the advice I’ve read in books; whether estimates from HVAC contractors are reasonable; how to confront a carpet bug infestation; how to soften my very direct communication style when sending emails (ChatGPT is an expert at turning my autistic bluntness into NT-friendly missives); how to interpret Department of Labor unemployment data; how to get Adobe Illustrator to do just about anything…
These consultations are engaged discussions. They’re collaborative. ChatGPT will answer a prompt, and then I’ll ask what evidence backs up that response, point out a counterargument, or build on the response with a new idea. When I interact with ChatGPT, I’m not just passively receiving information. It’s more than that: I engage in a creative, iterative, generative process that’s far more enriched than if I thought through these things alone.
Look, I’m not arguing that AI is unconditionally positive. There are lots of things to be concerned about, which thankfully other people are covering. But as with so many things, AI is neither wholly good, nor wholly bad. It’s technology, and what matters is how we use it.
Technology is about tradeoffs
I read a compelling essay arguing that the role of technology is ultimately about “alleviating the very hard trade-offs that humans have had to make since the dawn of time.” The piece discussed women in demanding fields who want both to achieve professional success and to bear children. The problem is that both goals require a huge amount of time, focus, and energy during the same window of a woman’s life: her 30s. Advances in technology – specifically, fertility treatments – allow women to postpone having children to later years, after their careers are more established.
I think that framing is useful here too.
The tradeoff when I use ChatGPT is also about time: the limited amount of time we’re afforded on each screen of this Choose-Your-Own-Adventure game we call life. I need to make decisions and move on. Or, I want to think deeply in the too-short moments I have to engage my natural curiosity.
By consulting with ChatGPT, I get to the heart of the matter more quickly. Instead of Googling open-ended terms and circling around the real concern, I can zero in within minutes: when it comes to hormonal birth control, breast cancer is the risk I personally need to weigh most carefully. (Of course, Google itself was a huge leap forward. The more information at our fingertips, the more agency we gain over our lives.)
When I talk about getting to the point quickly, I don’t mean hyper-optimization. This is not a self-help pitch, contending that we should calibrate our lives down to the minute in pursuit of maximum efficiency. Efficiency taken to its extreme would forbid the many conversations I have with ChatGPT pursuing random things for curiosity’s sake.
What I’m talking about is the tradeoff between the vast richness of the world and our limited time in it. I enjoy finding and figuring things out. I enjoy asking questions and getting answers and pressure-testing those answers.
Also, in these conversations, what I’m not doing is outsourcing thinking to AI. I’m still thinking. I’m actually extending my capacity to think through things more deeply, more deliberately, and more autonomously. AI gives me the opportunity to figure more things out for myself, more richly than I otherwise would.
What AI art critics miss
I think about this too in the context of AI art. There is a lot of pushback against AI-generated images; some say they’ll boycott publications that use AI art. Of course, people can do what they want, but I think this binary view overlooks that there’s still a human agent generating the art.
It’s a creative act. An extension of the mind.
There are valid counterarguments. A common one is that it’s a bad thing to replace human creation with AI creation.
Here, I think context matters. Most Substack writers, for instance, are not in a financial position to commission illustrated work. In this newsletter, AI art is replacing free stock imagery (before I created this dinosaur, I was considering a bland photo of a birthday cake with candles). So yes, we should worry about paying artists to create work. But not all AI art is replacing paid opportunities.
I also take issue with the argument that AI creation is not human creation. Critics sometimes suggest that unless a human personally executes every brushstroke (or pixel), the work lacks authenticity. But that’s a narrow – and historically inaccurate – view of authorship.
Renaissance masters didn’t paint every inch of their canvases. They worked with apprentices in ateliers, guiding the composition, correcting errors, and bringing vision to form through collaborative effort. And yet, we still credit the art to the masters. There are modern-day equivalents, too: artists, architects, and fashion designers who attach their names to works they envisioned but someone else executed.
Likewise, the use of AI to generate an image doesn’t have to mean that an artist has been swapped for a computer. Instead, it’s a new form of art. When I generate AI art, the subjective choices are still mine: what mood, what subject, what color palette, what to reject and revise. Just as we’re able to tell “bad” human art from “good” human art, so too can we distinguish “bad” AI art from “good” AI art. And setting those imprecise categories aside, I suspect we’ll also be able to glimpse something of the creator in the AI art itself – just as we do with human art.
Using AI doesn’t have to be an abdication of human creativity. It can be creatorship, extended.
The point of all this
My point is not to argue what other people should do when it comes to using AI.
My point is that deeply personal use of AI is the extended mind theory in action. When we use it, we don’t need to become thoughtless automatons.
Instead, we can extend our minds outwards through the medium of AI, so that we may, consciously and with intention, engage with more ideas and problems and discoveries and creations than we would otherwise.
Did you enjoy this essay? Please support my work: like and comment on this piece, which will highlight it for more readers, and subscribe (for free!) to my newsletter, Strange Clarity.