“The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence, but that it already has.” — Robert Sternberg

The use of AI and LLMs has become increasingly widespread and accessible to everyone: students write their assignments with AI, programmers generate code with AI, and some people even use AI as a psychologist, not to mention those who have “married” an LLM.

I am not here to demonize these tools. Clearly, they are useful for simple, routine tasks; they can streamline daily life and free our time for more important matters. But what happens when we also delegate those important tasks to artificial intelligence? What happens when we replace artists, psychologists, doctors, and programmers with a statistical prediction machine?

More and more, people stop thinking and let an LLM think for them. The effects on the brain are predictable: neural pathways atrophy, synapses stop forming, memory deteriorates, critical thinking unravels, creativity becomes scarce, and each act of offloading lowers our cognitive engagement a little further. Just as muscles atrophy when we fail to exercise them, we are atrophying our brains by not using them. As Clark and Chalmers (1998) argued in “The Extended Mind,” delegating cognitive functions to external tools is nothing new, from stone inscriptions to calculators. But when we outsource not just calculation but also creativity, ethical judgment, and critical thinking, the risk runs deeper: the tool no longer merely supports the mind; it replaces it.

And here a second crucial point emerges: automation bias. Decades of research in aviation and healthcare shows that when automated systems are present, people tend to trust them blindly and verify less (Mosier et al., 1998). In other words, the more “safety” a machine gives us, the less we question its outputs. The danger is obvious: by accepting AI’s answers uncritically, we chew less on ideas and swallow whatever is handed to us, regardless of quality or truth.

With the overload of information we receive today, the idea of not having to think anymore is tempting; after all, our brains are already exhausted from the excess of stimuli from social media, advertisements on every corner, and both digital and analog noise pollution. Of course, you may want to let your brain “rest” and leave AI to handle that tedious task. But then, what exactly are you feeding your brain? What stimuli have you absorbed?

Make no mistake: the LLM you use was designed and trained by someone else, and that someone can shape its information to fit particular political views, values, and opinions. By becoming dependent on the tool, you also become dependent on that information, which may be biased. Accepting all of this without question is letting yourself be manipulated by another. As Zuboff (2019) warns in The Age of Surveillance Capitalism: “whoever controls the data controls behavior.” Have you ever asked yourself who this “other” is?

This is where the metaphor of the lobotomy begins to echo: in the twentieth century, lobotomy was defended as a way to bring “efficiency” and “calm” to disturbed minds. In the process, it stripped people of agency, emotion, and critical thought. Today, by outsourcing our reasoning to AI, we are not cutting into our frontal lobes, but we are voluntarily letting them atrophy. Lobotomy promised peace at the cost of thought; AI promises efficiency at the possible cost of cognition. If once it was the scalpel that made the cut, today it is comfort that does.

The reflection I propose here is that we reclaim the humanization of thought: read texts by real authors, appreciate works by real artists, listen to music composed by real people, return to writing on your own, and take up manual activities away from screens. Do not surrender to this process of mass “robotization,” of flawless execution, of Fordist production. Do not let yourself be lobotomized: only we are capable of reflection. As The Illusion of Thinking (Shojaee et al., 2025) argues, the so-called “capacity for reflection” in AIs is nothing but an illusion. Do not be deceived: your thinking is rare, and it is precious. “Cogito, ergo sum,” wrote Descartes: “I think, therefore I am.”

Ultimately, the question is not whether we should or should not use AI, but how to use it without losing sight of our own agency. After all, what is the price of efficiency if it comes with the abdication of our thinking?


References

  • Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002
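  • Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7
  • Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation bias: Decision making and performance in high-tech cockpits. The International Journal of Aviation Psychology, 8(1), 47–63.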
  • Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089
  • Lyell, D., Magrabi, F., Raban, M. Z., & Coiera, E. (2017). Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association, 24(2), 423–431. https://doi.org/10.1093/jamia/ocw105
  • Massaro, M., et al. (2025). Are We Offloading Critical Thinking to AI? A Review of Empirical Studies on Cognitive Offloading and Decision-Making with Artificial Intelligence. Societies, 15(5), 66. https://doi.org/10.3390/soc15050066
  • Jose, A., Chatterjee, K., & Aithal, A. (2025). The cognitive paradox of AI in education: between enhancement and erosion. Frontiers in Psychology, 16, 1550621. https://doi.org/10.3389/fpsyg.2025.1550621
  • Heersmink, R. (2024). Use of large language models might affect our cognitive skills. Nature Human Behaviour, 8(5), 805–806. https://doi.org/10.1038/s41562-024-01859-y
  • Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8(10), 1839–1850. https://doi.org/10.1038/s41562-024-02004-5
  • Risko, E. F., Gilbert, S. J., et al. (2019). A role for metamemory in cognitive offloading. Quarterly Journal of Experimental Psychology, 72(5), 1068–1083. https://doi.org/10.1177/1747021818798691
  • Dantas, A., Santos, M., & Pereira, J. (2024). Impactos da inteligência artificial generativa na formação docente: oportunidades e riscos para o pensamento crítico. Revista Brasileira de Política e Administração da Educação (RBPAE), 40(1), e020. [SciELO/ANPAE open access]
  • Silva, M. (2017). Educação, mídias e o desafio do pensamento crítico. Educação & Sociedade, 38(139), 331–347. [SciELO Brasil]
  • Dias, A. P., & Souza, R. (2024). Docentes e IA generativa: tensões entre facilitação e heteronomia cognitiva. Revista Estudos IAT, 14(1), 1–18. [SciELO / open access]
  • Bastos, F. (2017). Internet, informação e o risco de “pensar menos” na educação. Educação em Revista, 33, 1–20. [SciELO/FCC]
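  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York, NY: PublicAffairs.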
  • Ban, T. A. (2001). The role of psychosurgery in the history of psychiatry. Journal of the History of the Neurosciences, 10(1), 79–92. https://doi.org/10.1076/jhin.10.1.79.5636
  • El-Hai, J. (2005). The lobotomist: A maverick medical genius and his tragic quest to rid the world of mental illness. Hoboken, NJ: Wiley.
  • Fernandes, T. L., & Fernandes, A. R. (2020). A história da lobotomia: da ascensão à queda. Revista Brasileira de História da Medicina, 10(2), 45–58.
  • Freeman, W., & Watts, J. W. (1942). Psychosurgery: Intelligence, emotion and social behavior following prefrontal lobotomy for mental disorders. Springfield, IL: Charles C. Thomas.
  • Pressman, J. D. (1998). Last resort: Psychosurgery and the limits of medicine. Cambridge: Cambridge University Press.
  • Valenstein, E. S. (1986). Great and desperate cures: The rise and decline of psychosurgery and other radical treatments for mental illness. New York, NY: Basic Books.
  • Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. arXiv. https://doi.org/10.48550/arXiv.2506.06941