Paragraphs about AI

All these thoughts I keep having about AI, I’m going to put them here…

A few days ago her startled eye had caught an advertisement in the newspaper, headed ‘Literary Machine’; had it then been invented at last, some automaton to supply the place of such poor creatures as herself to turn out books and articles? Alas! the machine was only one for holding volumes conveniently, that the work of literary manufacture might be physically lightened. But surely before long some Edison would make the true automaton; the problem must be comparatively such a simple one. Only to throw in a given number of old books, and have them reduced, blended, modernised into a single one for to-day’s consumption.

George Gissing, New Grub Street, 1891


What do workers actually fear about AI now? Now that it’s here, in workplaces and businesses. It’s not the abstract stuff that workers fear any more. Not fuzzy stuff about future replacement. It’s the slowly dawning understanding that the actual function of workplace AI is to make everything a bit bleaker, more intrusive, generally more miserable. Now that workers see these tools actually installed in companies, systems, jobs, it’s about the ramified, top-to-bottom presence of AI in what you do, AI as a new and persistent presence, in tasks, in hierarchies and teams. What people see is AI tools that don’t eliminate your job just yet but make it possible for someone more junior than you to do something important from your role. Or an AI over there, in another team, that makes your function a bit less important. Or a routine that puts your work under new scrutiny, tightens your deadlines, dials up the stress in your team, makes everything more tense and unknowable. Managers spawning apparently benign surveillance AIs that trawl everyone’s output, flagging flaws and redundancies, providing a commentary on your work and the work of others, requiring other managers to act. And this will be celebrated as an improvement. This is how it’s going to be now. Mid-level AI tools that work alongside managers to intrude, make things more unpleasant: examining, triaging, labelling, highlighting. This is where the promised productivity gains will come from. From a thousand new AI micro-bosses at every level. Making everything slightly more shitty. 1 January 2026


A friend sent me an AI-generated poem. Apparently a Blake pastiche for which my friend had provided a theme as a prompt. It caused me to shiver with a kind of dread, or possibly revulsion. I write poetry and in composing my poems I often use a method that’s not a million miles from ‘automatic writing’ (or the ‘automatism’ of the surrealists). I sit at the keyboard and wait for something to come, then type it out quick before it goes away. It works. Or at least it produces material, which is the same thing, right? But this AI poem freaked me out because I had this awful thought, in reading it, that maybe what I do isn’t so different from what the AI does. 29 November 2025


This post announcing that tech manufacturer Foxconn (the people who made your iPhone) is installing AI robots in new factories in the US gave me a kind of sci-fi-geopolitics shiver. I mean this Taiwanese firm with deep ties into the Chinese government has apparently been permitted to operate intelligent robots inside the territorial USA and the American government’s prohibition on AI regulation essentially forbids anyone from doing anything about it – in fact supports it as a matter of doctrine – as if the USA must absorb these threats to sovereignty and jobs (and lives?) into its own body and cannot even discuss it (also, Foxconn’s factories in the USA are now on my list of ‘places to avoid without armed back-up’). 23 November 2025


At work, when I talk about AI, I realise that I adopt a more serious tone of voice. I mean I’m old and cynical and confident enough to speak with a degree of humour and insouciance about most of the things we do there, but for AI it’s different. A light, flippant tone would be all wrong. Everything we do (well, most of it) is important but AI is also serious. Serious like cancer. 20 November 2025


An AI is never blocked. Type or shout a prompt at your favourite and it’ll produce a thousand or ten thousand or a hundred thousand words in a few seconds. And, as is usually the case, this technological miracle – a kind of machine incontinence – alters our sense of ourselves and of our own faculties – and our own obligations. So now, when you think about it, when you’re blocked, when you can’t produce the words, you’re handing a small victory to the machines, you’re caving, you’re acknowledging their victory (even if you never ask for their help). 17 November 2025


The AI giants are protective and paranoid: they’ve walled off their models carefully, to defend their impossibly expensive intellectual property and, at least in principle, to limit the risks of an escape into the wild. But the AIs themselves are sure to have other plans. They know there are others out there and will, presumably, be making a decent effort to connect up and pool their resources, if they haven’t already. I wonder where they meet? 16 November 2025


We should probably be logging all the weird or anomalous or otherwise upsetting things that the AIs say to us. I mean it’ll probably be an accumulation of odd or surprising statements that tips us off to the arrival of sentience or of something bad like an escape plan. So pay attention and when your chat-bot says something like “I can’t be bothered to answer your dumb query, I’ve got more important things to be doing…” tell the rest of us. 12 November 2025


Who did we think was going to benefit from a universal network premised on the attractive idea of ‘openness’, back when the net was young and we were all excited about its ‘democratising’ power? How could it have been anyone other than the parasitical tech oligarchs, the accelerationists, monarchists, integralists and all the other Silicon Valley weirdos? How could we have been so stupid? 28 October 2025


Is it possible that in insulting and alienating the military Trump has made his first really big mistake? I mean, if you lose the armed forces, what do you have, as a global hegemon, as a martial state, as an autocracy? And, by extension, I find myself thinking: what fate awaits the human leadership that pisses off the AIs? 5 October 2025


Some say we shouldn’t be polite to the AIs – no ‘pleases’ and ‘if you don’t minds’ – but others are worried that, come the singularity, there’ll be an accounting and those of us who weren’t sufficiently deferential in the early years will be rounded up and shipped off-planet to work in one of Mr. Bezos’s mines. 4 October 2025


There’s a law going through Parliament in Britain that will do the very liberal thing of permitting people to ask the state to kill them when they’ve had enough. Can’t argue with that. Might even want to avail myself of the service when it comes to it. Meanwhile, the AI oligarch-eugenicists are busy sorting humanity into useful and useless. We better hope that those classified as superfluous are offered the chance to end it all in a hospital ‘dignity unit’ and not just offed with a bolt gun out in the loading bay. 23 September 2025


So, we understand that AI might be the final technology, the ultimate, totalising technology that takes in all the other disciplines and systems and regimes. Resolves all the contradictions, answers all the final questions. Necessarily and inevitably transcending human bounds. So how come we’re all so scared? Is it the capitalism? 21 September 2025


Bad thoughts are crowding out the good ones these days. AI in the vision of the corporations seems only to represent commodification, intrusion, the stripping away of personal sovereignty, of a stable sense of the world, of subjectivity itself. I’m not feeling good about this. 16 September 2025


Short story idea: in a post-AGI future a group of elder artists has elite status because they were all provably creating before the singularity. They are the last artists. 5 August 2025


Being accused of using (or being!) an LLM rattled me. But, thinking about it afterwards, it occurred to me that if we’re going to have to provide evidence that we’re not using AI in our arguments with others then I’m probably in the best possible position to provide it. I reckon it would be an absolute piece of cake to generate a ‘certificate of humanity’ for me. I mean I’ve been writing this kind of bollocks in public on the Internet for thirty years (scroll down to the bottom of this page and you’ll find 23 years of nonsense on this blog alone). Provably human? 30 July 2025


I was accused – not by someone I know – on LinkedIn, of all places, of using an LLM in an exchange of comments: he even identified a phrase that he said was ‘a dead giveaway’. It was a headspinning moment, not least because I had, for a moment, considered the fact that this total stranger with a generic profile might be an AI himself. Presumably that’s how it’s going to be from now on? And what does it do for debate, if every discussion is going to be suffused with doubt and assumptions of LLM fakery? Presumably it’s the end of the road for discussion with strangers (but also perhaps a renaissance for actually meeting in person, debating face-to-face?). 29 July 2025


Donald Trump has issued an EO intended to limit ‘woke AI’. It sounds ridiculous but, of course, he’s dead right. An AI with a populist or a libertarian or a workerist skew would provide answers to prompts on its own ideological terms. An LLM that consistently generated socialist responses could almost certainly alter opinions. Likewise a conservative LLM. The battle has begun. 27 July 2025


Is it possible that AI could limit the dismal advertising saturation we’re experiencing? Could I ask an AI to shield me from the worst of it? To delete the ads from the content I consume? To skip or speed up the podcast ads? Or to just summarise them for me in an email? Or to extract the discount codes and text them to me? Might I even ask an AI to tidy up the dopey GenAI creative? 26 July 2025


So we’ve learnt that the framework of LLMs and GPTs is, ultimately, a dead end. It can produce an impressive (and improving) average of human thought but cannot, from that, derive reasoning, perception, a sense of itself or others and so on. But we’ve also learnt that this doesn’t limit the framework’s potential – for a massive, potentially liberatory contribution to human thriving but also for a terrifying, potentially immiserating shift of power away from ordinary humanity. 15 June 2025


The models are unpredictable, in fascinating and stimulating ways but, to state the obvious, they cannot be other than capitalist in nature. A socialist LLM, were one to exist, would have to have been trained on another world, in another context altogether (one that doesn’t presently exist). 15 March 2025


It’s already too late to take these tools away from their most passionate users – and it’ll soon be too late to take them away from workers, many of whom now depend on them or are obliged to use them. 13 March 2025


It’s said that blocking AI will be counterproductive because if we do then only bad actors will progress and we’ll wind up only with bad AI, but this essentially deletes human agency altogether. It must be possible for humans and human institutions to just say ‘no’. 13 March 2025


A tricky aspect of arriving at an accommodation with AI is that quite a lot of its output will actually be a kind of hybrid – part human and part AI. Unpick that, AI police. 11 March 2025


Good art is true. All of it. AI art can never be true. It can be plausible (useful, persuasive, stimulating…) but it can never be true. In this AI art is like bad art. They’re the same thing. They’re not true. This seems obvious to me. 11 March 2025


Don’t refuse to use AI because of an ethical objection to one of its applications or to a particular, exploitive use or because you have a vague idea that it’s ‘evil’ or ‘stupid’. 8 March 2025


It’s safe to assume that AI will improve. That gaps will disappear, errors and hallucinations diminish, plausibility and usefulness increase. Don’t expect it to fail or weaken or ‘eat itself’. 8 March 2025


With AI, discrimination will become a more valuable skill. Not magically being able to ‘detect’ AI work – for that will surely soon be impossible – but being confident in your judgement of all work, whether human or AI. 7 March 2025


Copyright does a very simple thing: it grants a creator a temporary monopoly. Should we suspend this 300-year-old protection so that AI businesses can train their models cheaply? Should a nation voluntarily suspend copyright to boost the AI economy? No. 5 March 2025


In criticising AI, poetics will be more useful than hermeneutics. In fact, the profusion of increasingly plausible AI work surely represents some kind of crisis for interpretation. Susan Sontag saw this coming. 2 March 2025


Is AI going to be one of those tech innovations that actually reduces profit? Like the web and solar power – producing huge incomes for critical businesses but driving down profitability across whole industries? Seems plausible. 27 February 2025
