Artificial intelligence, as a powerful personal assistant, is transforming our relationship with intellectual production, expression, and the dissemination of ideas. It not only reduces the inherent frictions in creation – whether writing, analysing, or explaining – but also reshuffles the dynamics of influence and intellectual legitimacy.
[ Estimated reading time: 5 minutes ]

Structured summary shareable on social media:
🤖 AI, a tool for intellectual empowerment and a catalyst for change
[ ⏱️ Estimated reading time for the full article: 5 minutes ]
📚 Long reserved for experts skilled in rhetoric and jargon, intellectual legitimacy is becoming more accessible thanks to artificial intelligence. By making structuring, analysis and writing tools accessible to all, AI brings solid ideas to the fore and exposes empty rhetoric. It acts as a revealer of meaning, refocusing public debate on the relevance of content rather than the technicality of form.
🚀 In the context of ecological, social and economic transitions, this redistribution of means of expression opens up new possibilities. Everyone can now actively contribute, without being held back by language barriers or codes reserved for an elite few. For some, AI could even become an opportunity for the less skilled, as Philippe Silberzahn suggests.
😨 However, fears still dominate: employment, deepfakes, energy consumption. But these risks must not overshadow the opportunities. Technical revolutions have always caused concern… before advancing humanity. Data centres certainly consume more energy, but their efficiency is improving. Compared to other sectors, AI’s carbon footprint remains limited.
⚖️ Smart regulation is needed: not a paralysing ban, but an evolving framework that would arbitrate the risks of use… and non-use. In sensitive areas such as health and education, AI can be decisive. Rather than pitting AI against an idealised humanity, it is better to confront it with real alternatives – which are often imperfect.
📰 Journalism is a good example of this tension. While newsrooms are still publicly debating the ethical use of AI, many journalists are already quietly using it to improve their work. Just as calculators freed mathematicians, AI could free journalists from drudgery and allow them to refocus on the essentials of their profession: understanding, analysing and reporting.
🧠 What if, far from weakening our thinking, AI encouraged us to be more critical? A recent study shows that a dialogue with ChatGPT can change deeply held beliefs, not through direct confrontation, but by engaging in empathetic reasoning. Far from feeding misinformation, AI can thus accompany inner transitions.
❓ Finally, if this article was co-written with the help of AI, does that make it less relevant? Should the judgement be based on the tool… or on the value of the ideas it helps express? What if AI, when used properly, became a lever for intellectual empowerment, an ally for fairer, more critical and informed transitions?
#ArtificialIntelligence #CriticalThinking #EcologicalTransitions #ResponsibleInnovation #Journalism #TechPhilosophy #PositiveAI
AI: the end of the reign of words and an accelerator of ideas
Between irrational fears and transformative opportunities
Towards informed, not paralysing regulation
A lever for critical thinking and empowerment
AI: THE END OF THE REIGN OF WORDS AND AN ACCELERATOR OF IDEAS
For a long time, mastery of language, style and conceptual tools has allowed certain people to monopolise attention and exercise sometimes undeserved authority. AI is challenging this dynamic by making high-level writing and analytical skills accessible to all. From now on, the value of an argument no longer depends on rhetorical skill or technical vocabulary, but on the soundness of the ideas it defends.
In this sense, AI acts as a catalyst for clarification and discernment. Where the Sokal affair[1] highlighted the manipulative use of jargon to confer an artificial aura of intellectual depth, AI, by democratising access to structuring and argumentation tools, unmasks bluffing. It allows us to focus collective attention on the relevance of content, accelerating the demise of weak ideas and propelling truly transformative proposals forward.
In the context of the ecological, social and economic transitions we must make, this cognitive revolution opens up unprecedented opportunities. It gives everyone back the ability to actively participate in debates and propose solutions without being hindered by language or rhetorical barriers. Ultimately, as strategy professor Philippe Silberzahn points out, could AI represent an opportunity for the least qualified[2]?
BETWEEN IRRATIONAL FEARS AND TRANSFORMATIVE OPPORTUNITIES
In general, it is essential not to fall into the trap of negativity bias[3], which consists of focusing on the risks and abuses of AI (personal data breaches, manipulation via deepfakes, feared job losses) rather than on its immense opportunities. As Philippe Silberzahn points out, automation has always raised fears[4], but historically it has led to increased productivity, market expansion and the creation of new jobs. Similarly, AI is not intended to replace humans, but to be a powerful tool that multiplies our capabilities. Rather than giving in to fear or waiting to see what happens, we must learn to integrate it intelligently in order to maximise its transformative potential.
Another aspect that is often raised is the energy impact of AI[5]. According to the International Energy Agency, data centres currently account for around 1-2% of global electricity demand. While the rise of AI will increase this consumption, it should remain contained thanks to technological efficiency gains: since 2008, the energy intensity of computer chips has decreased by more than 99%, allowing growth in usage to be absorbed without an explosion in energy consumption. According to a recent report by Goldman Sachs Research[6] examining the impact of the rise of artificial intelligence, the share of global electricity consumption attributed to data centres could reach 3-4% by 2030, while their CO₂ emissions are expected to more than double, reaching 0.6% of global energy-related emissions. That said, although AI models consume more energy than traditional Internet searches, their overall impact remains lower than that of sectors such as industry, transport and air conditioning, the last of which alone accounts for 10% of global electricity demand[7]. The energy issue must therefore be addressed with nuance, taking into account technology companies' efforts to optimise and to invest in low-carbon energy.
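As a rough sanity check – not taken from the cited reports themselves, just arithmetic on the figures the article quotes from the IEA and Goldman Sachs Research – the orders of magnitude can be sketched in a few lines:

```python
# Back-of-envelope check of the orders of magnitude cited above.
# All values are the article's rounded figures, not precise measurements.

intensity_drop = 0.99        # chip energy intensity: down >99% since 2008
share_today = (0.01, 0.02)   # data centres: ~1-2% of global electricity demand
share_2030 = (0.03, 0.04)    # projected 3-4% by 2030
aircon_share = 0.10          # air conditioning: ~10% of global electricity demand

# A >99% drop in energy intensity means each unit of compute now needs
# less than 1% of the 2008 energy, so usage can grow roughly a hundredfold
# on the same energy budget.
usage_growth_absorbed = 1 / (1 - intensity_drop)
print(f"usage growth absorbed by efficiency gains: ~{usage_growth_absorbed:.0f}x")

# Even the upper bound of the 2030 projection stays well below the share
# the article attributes to air conditioning alone.
print(f"2030 upper bound vs air conditioning: {share_2030[1]:.0%} < {aircon_share:.0%}")
```

The point is not precision but scale: a roughly hundredfold efficiency gain dwarfs the projected doubling of data-centre demand, which is why the paragraph argues the growth can be absorbed.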
TOWARDS INFORMED, NOT PARALYSING REGULATION
Finally, it is essential to adopt a balanced approach to AI regulation. To be reasonable, it should be counterfactual, weighing the pros and cons rather than hiding behind a dogmatic precautionary principle. Any sensible regulation must be based on a trade-off between the risks of use and the risks of non-use[8]. Excessive caution can slow down major advances, particularly in fields such as medicine, where AI can accelerate research and improve patient care. As philosopher of science Maarten Boudry points out, precaution sometimes means taking action[9]; the adage that we should not act if there is a risk is simply unrealistic because, as he puts it, sometimes jumping is safer than standing still. However, we must not ignore the fact that AI can cause harm, as when a self-driving car causes an accident. While regulation must ensure ethical and safe use, it must not stifle innovation by imposing disproportionate constraints. Comparing AI to an ideal, unrealistic situation prevents us from measuring its true impact against human alternatives, which are often imperfect. A pragmatic, evolving approach is therefore needed to maximise its benefits while controlling its risks.
While the debate on the use of AI occupies editorial committees[10] in the press, many journalists are already using it daily, often discreetly and without publicly acknowledging it. For fear of criticism or internal disapproval, some young journalists prefer not to reveal that they rely on ChatGPT or other AI tools to structure their articles, offer new perspectives, rephrase passages or even write entire pieces. As a result, confusion persists about the ethical and professional framework surrounding these practices: some newsrooms strictly regulate the use of AI, while others officially prohibit it yet implicitly tolerate its use behind the scenes. This paradox reveals an ongoing transition in journalism, in which AI, rather than being perceived as a threat, could come to be recognised as a legitimate professional asset.
A LEVER FOR CRITICAL THINKING AND EMPOWERMENT
In this perspective, back in April 2023, in his article AI, the new typewriter[11], Antoine Bueno defended the idea that, just as the advent of the calculator freed mathematicians from the drudgery of certain calculations, AI frees journalists from repetitive editorial work, allowing them to refocus their profession on analysis and creativity. Like the calculator, which did not kill mathematics but allowed mathematicians to go further, ChatGPT is a writing aid rather than a gravedigger for journalism and literature. Although it poses a challenge to writers, who are impressed by its creativity[12], it does not replace their soul. Similarly, although deepfakes are worrying, they could strengthen our critical thinking about images and accelerate an anthropological shift in our relationship with reality[13]. Just as printing forced us to distinguish between truth and falsehood by teaching us that not everything that is written is true, the proliferation of falsified images could make us more vigilant. Far from heralding a collapse of thought, AI could, on the contrary, be a lever for intellectual emancipation, forcing us to redefine our standards of truth and discernment. That said, this does not answer all the questions raised by the use of AI, including the legitimate question of moral harm raised by many artists[14], for example.
By viewing AI as a tool for intellectual emancipation whose consequences are determined by how it is used, Antoine Bueno was right on the mark. Contrary to the widespread belief that conspiracy theorists are gullible, an article in Psyche magazine[15] highlights their active engagement in seeking information and constructing sophisticated arguments. The quest for discovery and the intellectual pleasure we derive from our personal research play a key role in our adherence to different theories. In this regard, a preprint study from 2024[16] showed that interactions with ChatGPT-4 Turbo significantly and lastingly changed the beliefs of people who adhered to certain theories. Rather than simply countering preconceived ideas with facts, the model engaged in reasoning with users, questioning their assumptions and stimulating their critical thinking. This dialogue allowed for a gradual repositioning of beliefs, drawing on the individual experiences of the participants. Far from encouraging misinformation, AI, by adapting to individuals’ experiences, can therefore promote critical thinking and help break free from dogma.
Moreover, does the fact that this article was written using artificial intelligence make it irrelevant? Does this method of production call into question the credibility of the ideas developed in it? If not, why should we be uncomfortable with a tool that, when used properly, improves our discursive abilities? Is it really its use that is problematic, or rather the idea we have of it? Rather than seeing AI as a threat to critical thinking, shouldn’t we consider it as a lever for intellectual deepening and emancipation? Under these conditions, wouldn’t it be a lever for accelerating fair and informed transitions?
Jonathan Guéguen with ChatGPT
[1] Nicolas Journet, ‘The Sokal Affair: Why France?’, Sciences Humaines, 2005.
[2] Philippe Silberzahn, ‘What if AI were an opportunity for the least qualified?’, Philippe Silberzahn Blog, 2023.
[3] Laurent Bègue-Shankland, ‘Phébé – Why we pay more attention to the negative than the positive,’ Le Point, 2020.
[4] Philippe Silberzahn, ‘Seven reasons why you are already missing the AI revolution,’ Philippe Silberzahn Blog, 2023.
[5] Hannah Ritchie, ‘What’s the impact of artificial intelligence on energy demand?’, Sustainability by Numbers, 2024.
[6] Goldman Sachs Research, AI/Data Centres’ Global Power Surge: The push for the “Green” data centre and investment implications, 2025.
[7] Howarth, N., Camarasa, C., Lane, K., & Risquez Martin, A., Keeping cool in a hotter world is using more energy, making efficiency more important than ever, International Energy Agency, 2023.
[8] Philippe Silberzahn, ‘Five principles for intelligently regulating AI’, Philippe Silberzahn Blog, 2024.
[9] Maarten Boudry, ‘It’s time to bury the precautionary principle,’ Le Point, 2021.
[10] Coppélia Piccolo and Florian Gouthière, ‘I wrote 20 lines and ChatGPT wrote the other 40: how AI is changing the practices of journalists,’ Libération, 2025.
[11] Antoine Buéno, ‘AI, the new typewriter,’ Libération, 2023.
[12] William Galibert, ‘When writer Hervé Le Tellier takes on ChatGPT: will AI kill literature?’ RTL, 2023.
[13] Antoine Buéno, ‘Deepfakes, a spiritual revolution,’ L’Opinion, 2024.
[14] Sigal Samuel, ‘The Artist’s Best Argument Against AI,’ Future Perfect newsletter, Vox, 2025.
[15] Stephen Gadsby & Sander Van de Cruys, ‘The surprising role of deep thinking in conspiracy theories,’ Psyche, 2024.
[16] Thomas H. Costello, Gordon Pennycook and David G. Rand, ‘Durably reducing conspiracy beliefs through dialogues with AI,’ 2024.