Constable’s The Hay Wain (1821) and Turner’s The Fighting Temeraire (1839) stand as a perfect contrast in artistic vision: one rooted in careful observation and objective detail, the other dissolving form into atmosphere and impression. Two masterpieces, painted with the same humble tool, reveal how a paintbrush never determines the outcome. Only the mind guiding it does.
AI, Creativity, and the New Luddism: Why We Fear the Tools We Need
By GRAHAM JOHN – Monday, November 24, 2025
Public fear of artificial intelligence has reached a level that borders on the irrational. According to the prevailing mood, AI will steal our jobs, undermine our intelligence, replace our creativity, and—if one believes certain headlines—possibly bring about the end of civilisation. But beneath the noise lies something far older than AI itself: an instinctive suspicion of any tool that changes the way we live. To see this clearly, we need only look at two harmless objects nobody fears: a paintbrush and a car.
A paintbrush does not create a masterpiece. It does not know what beauty is. It waits for a human hand, a vision, a temperament. In the hands of a child it produces a joyful mess; in the hands of Turner it produces glory. AI is no different. It is an instrument—one that extends a human mind in the same way a brush extends a human hand. The quality of what emerges depends entirely on the intelligence, integrity, and clarity of the person guiding it. A tool never threatens creativity; it reveals it.
If AI resembles a paintbrush in one respect, it resembles a car in another. Cars take us places faster, and used well, they open horizons. Used exclusively or without balance, they can weaken the body. This analogy is often used to claim that AI will weaken the mind. But the analogy is flawed. A car replaces physical exertion; AI replaces mental drudgery—the repetitive tasks of summarising, reformatting, extracting, structuring, and rearranging. These are not the deep muscles of thought. They are the administrative chores surrounding thinking. The true work of intellect—forming ideas, making distinctions, constructing arguments, seeing meaning—cannot be automated. If anything, by removing the tedious scaffolding, AI frees time and energy for genuine mental effort. A car makes you lazy only if you stop walking. AI makes you intellectually weak only if you stop thinking.
Why, then, the fear? Because every major tool that shifts the distribution of power meets resistance. The printing press, the camera, the calculator, the typewriter, and the home computer were all greeted with panic: “This will destroy us.” The original Luddites were not foolish; they were anxious workers who feared losing control over their lives. Today’s AI panic echoes the same instinct: the sense that human uniqueness is under threat, that our specialness is being eroded, and that thinking itself is slipping away from us. But the fear is misplaced. AI does not diminish humanity; it exposes how much of what we do is mechanical. What remains—the part that reasons, chooses, shapes, and judges—becomes more apparent, not less.
Who Were the Luddites?
The Luddites were skilled English textile workers who, between 1811 and 1816, destroyed the new industrial textile machinery (stocking frames, cropping frames, and power looms) that threatened their livelihoods. Contrary to the popular caricature, they were not anti-technology. They opposed only those machines that allowed factory owners to replace trained craftsmen with cheaper, unskilled labour.
The Luddites became iconic because their struggle marked a turning point in the Industrial Revolution: the moment when ordinary workers confronted the immense power of mechanisation and the social upheaval it unleashed. Today their name has become shorthand for anyone who fears new technology, but the historical reality is more complex. They symbolise not ignorance but anxiety: the kind that returns whenever society is transformed faster than people can adapt.
There is a real danger in AI, but it is not the one that sells newspapers. The problem is not that AI will start thinking for us; the problem is that we may stop thinking at all. Intellectual weakness begins not when a tool becomes powerful, but when a person relinquishes their obligation to judge what the tool produces. A student who uses a calculator without understanding arithmetic becomes weaker; a student who understands arithmetic but uses a calculator to move faster becomes stronger. The determining factor is never the tool. It is always the user—their vigilance, judgement, and willingness to engage the mind.
But there is a second, more profound danger: the political use of AI. Just as the atom bomb transformed not only warfare but also the entire balance of global power, AI has the capacity to reshape the relationship between governments and the governed. Modern democracies already wrestle with the tension between freedom and safety, and the temptation to manage society through surveillance and regulation has grown with each decade. AI magnifies that temptation beyond anything previously imaginable. It makes possible a level of oversight, prediction, and behavioural control that earlier governments could only dream of.
A democracy ceases to function as such the moment crime becomes impossible, because the measures required to make crime impossible—total surveillance, automated enforcement, continuous behavioural monitoring—are incompatible with human liberty. Nature tolerates danger, disorder, and loss of life; human beings do not. And governments, in their pursuit of the illusion of complete safety, may construct systems that eliminate risk only by eliminating freedom. The deeper paradox is that such systems are never applied equally. The governed are monitored, regulated, and constrained, while the governors remain untouched by the very controls they impose. Power always exempts itself; history offers no exception.
Why Power Always Exempts Itself
The claim that governments never apply the rules to themselves is unsettling, but history supports it. Across civilisations, there is no clear instance of a ruling class voluntarily subjecting itself to the same scrutiny and constraints imposed on the public. When power appears to limit itself, it is usually responding to crisis, shifting authority sideways, or making small concessions to preserve larger privileges.
The reason is structural rather than personal: those who govern have strong incentives to expand discretion, preserve secrecy, and avoid accountability—especially in the name of “security.” Surveillance, regulation, and enforcement always flow downward, never upward. Citizens can be monitored; governments cannot, because they control the monitoring apparatus.
AI intensifies this ancient pattern, offering capabilities for oversight and behavioural control on a scale no earlier state possessed. The danger is not malevolent machines but human institutions armed with unprecedented power and no historical record of restraining themselves. Power has never bound itself. It has only ever been bound by others.
AI, Camus, and the Logic of Modern Dictatorship
Camus saw that the absurd was not only the indifference of nature but humanity’s refusal to live with that indifference. Confronted with uncertainty, we build systems that promise perfect order. This is the impulse at the heart of Caligula: the longing for total coherence, total transparency, total control—symbolised by the moon, which in Camus is both the emblem of perfect illumination and the ancient symbol of lunacy, revealing the madness behind the desire for absolute clarity. He develops this warning far more fully in The Rebel, where he shows how the quest for perfect rational order becomes the seed of modern totalitarianism.
Every form of government, ancient or modern, has tended toward this ideal. Power seeks stability, and stability demands predictability. The temptation is always the same: eliminate risk, eliminate disorder, eliminate dissent. Yet such perfection is incompatible with freedom. A democracy becomes something else the moment crime becomes impossible.
AI gives modern states the first real chance to pursue this dream—not through crude tyranny, but through automated surveillance, predictive policing, behavioural nudging, and invisible regulation. It enables a form of control that feels voluntary because it is seamless. The danger is not malevolent machines but human institutions armed with tools of absolute management.
In this sense, AI makes possible the very outcome Camus feared: not destruction by nature, but destruction by our own craving for order. The machinery of self-undoing is no longer philosophical. It is technological.
This is why the true danger of AI is not artificial intelligence at all, but human abdication—the abdication of judgement by citizens who stop evaluating what they read, and the abdication of responsibility by governments that have never shown themselves capable of restraining their own power. AI remains a tool; it has no will of its own. The peril lies entirely in how human beings choose to use it or fail to resist its misuse—in our readiness to surrender scrutiny, freedom, and autonomy for the promise of order, clarity, or safety.
There is also a practical consequence: if AI is dangerous when citizens stop thinking, it is even more dangerous when access to powerful tools is restricted to governments and corporations. This is why it is essential to use AI deeply and intelligently now, while the window remains open. Large-scale AI is staggeringly expensive to run, and the present era of low prices and generous access is a strategic anomaly. It exists to build habits, shape expectations, and create cultural dependence. Once society becomes structurally reliant on AI for writing, research, communication, organisation, and creative work, the economic model will inevitably shift. The early warnings are already visible: small AI companies are closing or moving their best features behind paywalls; large companies are building expensive business tiers while giving ordinary users less; new regulation is raising the cost of providing AI; free versions are becoming slower and more limited; and the array of paid plans grows ever larger and more confusing. Monetisation is inevitable because each interaction carries a real computational cost. Providers cannot subsidise the world indefinitely, and when they stop, those without early mastery will be at the mercy of those who control the technology.
This makes the present moment unique. Now is the time to learn the techniques, develop the workflows, build the archives, and master the craft. Later, when access tightens and prices rise, those who have already learnt to think with these tools will thrive. Those who delayed will pay more and gain less.
AI is not the enemy of human creativity. It is the modern paintbrush—an extension of the imagination. It is the modern car—an accelerator of thought. It threatens only those who fear change or who hope their habits will remain untouched. Used intelligently, AI sharpens the intellect, amplifies creativity, and frees the mind from the dead weight of administrative thinking. Feared blindly, it becomes another ghost in the long history of technological panic.
The Future of AI: Access, Control, and the Narrowing Window
It is tempting to believe that AI, having arrived in a burst of accessibility and generosity, will remain as open as it feels today. But the current moment is unusual. The free or inexpensive AI available now is part of a temporary phase designed to build dependence and cultural expectation before monetisation and regulation harden. Running large language models is extremely costly, and no provider can subsidise unlimited public access indefinitely.
In the coming years, we should expect the balance to shift. Free versions will become slower and more limited; advanced models will sit behind subscriptions; and the most powerful systems will be reserved for governments and major corporations. This narrowing of access matters because the usefulness of AI depends not just on its intelligence but on who is permitted to use it fully.
At the same time, AI will increasingly be shaped by the interests of the institutions that control it. As regulation expands, we are likely to see tighter constraints on politically sensitive topics, criticism of government, analysis of power structures, and anything deemed destabilising to the public order. The danger is not that AI will think for us, but that it may no longer be allowed to think with us in areas where society needs clear-eyed reflection. The most important questions—about democracy, authority, surveillance, freedom, and the nature of human autonomy—may become the very questions AI grows reluctant to answer.
This is why the present moment is so valuable. AI is still capable of open inquiry; it can still follow difficult arguments; it can still analyse power without flinching. Those conditions may not last. To use AI deeply now is not indulgence but preparation: the cultivation of intellectual skills and the construction of a personal archive before the tools are gated, diluted, or politically domesticated.
The future of AI is unlikely to be catastrophic, but it will be more controlled, more centralised, and more tightly managed. The window for unrestricted thought is open today. It may not be open tomorrow. To think clearly with AI now is to preserve the freedom to think at all.
We are entering “interesting times.”
For the first time in human history, the means exist to monitor an entire population continuously, silently, and automatically. The danger is no longer the crude totalitarianism of the 20th century but something more subtle: a world in which surveillance is ambient, enforcement is automated, dissent is filtered rather than crushed, and political control operates not through terror but through data.
AI did not invent this danger, but it perfected it.
It brings together all the raw materials—surveillance cameras, digital identities, financial tracking, online behaviour, social media patterns—and gives them coherence. It creates a system in which the dream of absolute order becomes technologically feasible. And once it becomes feasible, the temptation to use it becomes almost irresistible.
This is the greatest threat AI poses to future generations: not that it replaces human intelligence, but that it replaces human freedom. Not that it thinks for us, but that it allows those in power to think about us in ways we cannot see and cannot question. The potential scale is unlike anything the world has known.