When Plato tells the story of Theuth in the Phaedrus, the god offers his invention as a gift to humankind. King Thamus declines, with the warning that writing will “implant forgetfulness” and give only “the appearance of wisdom.” The common accusation against AI writing—that it weakens thought, produces imitation rather than understanding, and severs authorship from the living speaker—is the latest form of the same worry.

Derrida’s famous reading of the Phaedrus reframes Thamus’s fear. Writing is not simply a tool added to speech; it is a supplement, both addition and substitute. It appears to aid memory, but only because speech itself is already dependent on spacing, iteration, and deferral—the conditions Derrida names arche-writing. The supplement therefore exposes that the supposed origin (the speaking, remembering subject) was never self-sufficient. Writing does not corrupt presence; it reveals that presence is already trace.

From a neurological perspective, writing does of course literally rewire the brain. It recruits visual and spatial circuits that oral culture used differently, redistributing part of the labour of memory from the hippocampus to the page. In this sense, Plato’s complaint is empirically true: writing does change us. But the change is not necessarily degeneration—it can be seen as the exteriorization of the same operation that already structures memory internally. Derrida’s arche-writing here meets Clark and Chalmers’s “Extended Mind”: cognition and recollection extend into the environment through inscriptions that function as parts of the cognitive loop. The notebook, the screen, or the archive is not outside the mind but part of its system of traces.

What AI systems do is generalize this exteriorization. They no longer merely store traces; they process and generate them. The writing machine remembers, recombines, and returns language to us in new configurations. In functional terms it is another layer of the extended mind: a dynamic tertiary retention, in Stiegler’s phrase, that supplements human thought. As alphabetic writing once externalized static memory, AI writing externalizes and amplifies memory as process: it actively constructs what we call ideas. This extension into process suggests a greater rupture than there may actually be. The same structure of the supplement recurs: the aid that threatens to replace, the prosthesis that transforms what it extends.

Each stage—speech, writing, AI—alters neural, social, and cultural patterns, yet none of them abolishes the structure of arche-writing itself. The trace remains the constant; the embodiment of the trace shifts. The human, then, is not displaced by technology but continually re-inscribed by it. The history of media is the history of arche-writing writing itself through new substrates—from mouth, to hand, to code. The question is not whether AI will change us (it will) but how we will inhabit the new spacing it opens in the field of memory.

But this is too simple. The notion that the same phantasy, or concern, recurs in the shift from speech to writing and in the shift from writing to AI writing is valid. Yet, to reiterate, Plato was empirically correct in a sense, and today’s expressions of concern are correct in the same sense, because AI writing will alter the human. The issue concerns what exactly we think a human is. From a materialist perspective there is little issue here; likewise, from a Deleuzo-Guattarian perspective (which is not necessarily materialist), there is no real problem either: humankind simply extends its becoming-other possibilities.

This thinking concerns, rather, the phenomenology of the human as it takes itself to be: an incoherent coherence, as opposed to its deconstructed coherent incoherence. The incoherent coherence is that of a being of a certain autonomy, possessing its own thoughts and feelings. To place these outside of itself is felt to undermine its sovereign importance. This tension is what is felt (currently) and what produces the AI anxiety: literally, a threat to perceived human ontology.

There is one more issue, which is arguably more potent than the above. This is that Derrida actually misreads, or at least flattens, Plato. Derrida treats Plato’s notion of memory more as a cognitive function, but arguably Plato means by anamnesis something much more spiritual. If the Platonic memory is more akin to Bruno’s art of memory, then Plato warns against the loss of a channel further back into being, in an unambiguously magickal form. Neural rewiring in this sense is ontologically more than simply a change of cognitive functioning. Likewise, the more recent shift in which process itself becomes externalised can be seen as yet more damaging still to this access. From that perspective, every exterior inscription—whether written or algorithmic—is a distraction from the inner act of remembering the Good. If Derrida and Clark show that thought is always already technical, Plato reminds us that it may also be more than technical: a form of recollection that no prosthesis can perform on our behalf.

Without an absolute moral register, we cannot privilege either the inner motion or the outer motion. The problem is thus ethico-ontological: the choice concerns not only what we ought to do, but what we choose to be. Ethics comes into play here in the sense of a choice: we must consider, from various angles, which of the two constitutes what we wish to be—the autonomous subject whose access to Being is internal and effortful, or the re-inscribed human whose becoming is always already mediated by the technical trace. The history of media is the history of this ongoing ethical negotiation over the very boundaries of the human self.

“Hey Silvia, I’ve got a question for you.”

“What is it, Mike? If this is one of your dumb fictional scenarios, can we leave it? I really don’t have time at the moment.”

“No, no, this is like a real question.”

“Are you sure it’s nothing like that ‘are you part of the problem’ thing that you went on about for far too long, until you saw Kurt Vonnegut had already done it better?”

“That’s unfair, his idea was different to mine.”

“But arguably better.”

“His idea was more implausible; he had people living forever. I just had a realistic self-management system.”

“I remember, ‘ethical fascism’ you called it.”

“No one was ever taken away without consent.”

“It was open to abuse, and you know, anyway, why am I getting sucked into your madness? I have things to do, real things.”

“Oh yeah, like what?”

“This pile of paperwork, for one.”

“Is it real paperwork? I bet it’s not, I bet it’s just nonsense you could ignore, and no one would care.”

“Fine Mike, what is your question?”

“Ok, so it’s more a hypothetical moral dilemma than a question. I mean there is a question, but I have to go through the scenarios to get to it.”

“You said it wasn’t a dumb fictional scenario.”

“I said it was a real question, which it is. The fact I have to go through the scenarios to get to it is a separate issue, but since you agreed to answer the question, you’ll have to hear the scenarios. QED.”

“I don’t think this is a QED situation Mike. There’s nothing you’ve demonstrated here.”

“I demonstrated that you need to hear the scenarios to get to the question.”

“That’s not really… look, fine, fine, just get on with it now.”

“Ok so there’s two scenarios. In one there’s an AI…”

“An AI, seriously?! How tedious is this going to be?”

“Just hear me out, ok? So, there’s like a super AI. It’s much smarter than us, maybe it’s conscious, maybe not, but either way its capabilities are vast, and what’s more it’s stable and has our best interests at heart.”

“That’s nice of it.”

“Yeah, you see, that’s one of my twists, it’s not bad, it doesn’t go bad, it just stays, how do you say it, benefishee-ent.”

“No, it’s just beneficent, ben-ehf-uh-sent, or is it? Oh shit, I can’t remember, you’ve done that thing where it looks uncanny now. Ben-er-fish-ent? Is that right?”

“You’re sure there’s no hard ee sound?”

“Who cares Mike, just get on with it.”

“Ok so, we’ve developed a super capable AI with all the crazy levels of intelligence that you can think of and more besides. What’s more, humanity has collectively decided, or maybe the AI has decided, and we’ve gone along with it, that we should all get, like, a chip in the head.”

“How many of these clichés are there going to be? A super powerful AI, a chip in the head, seriously? Is the chip going to control us?”

“Yes.”

“Shoot me now. How much more of this drivel is there?”

“Just listen, ok? So we agree, the people that is. I mean, I suppose probably just most of us agree, so we have to suppose there may be a small amount of coercion, but that’s for the best in this scenario and how it works. We agree that we should all have a chip in our heads because we, collectively as a species, can’t stop ourselves from selfish, cruel, misery-producing behaviour that knows no limit.”

“What if I don’t agree?”

“Well, in this world, you’d have to agree, I already said that.”

“So it’s a fascist system?!”

“This is different, this is…”

“Ohmigod, this is just your ethical fascism thing again, isn’t it? You were literally about to say that, weren’t you? Weren’t you?”

“No, well, yes, sort of, but look, it’s better than the other one. No one dies here, not even voluntarily.”

“They just get a chip forced in their head.”

“Yes, but most people agree it’s a good idea, and it’s an all-or-nothing situation. I consider this a strength. There’s no Musky, Trumpy, Kingy guys escaping the chip. Everyone gets it. No private party laughing at the drones. Anyway, once the chip is in, nobody would mind it being there.”

“How so?”

“Because the chip isn’t evil, it’s good. It’s going to modulate all those neurotransmittery, hormonal pathways into a kind of bland, pleasant state. I guess it will be the dopamine, serotonin, HPA-axis stuff that it’d tweak. The AI will know what to do, as it will be able to monitor all the organisms’ different molecule cascades from the chips and then control each one to maintain a kind of neurochemical homeostasis that nicely cuts all the hard edges off their desires, creative and otherwise. It will probably also impair cognitive abilities somewhat, as a second kind of failsafe against the organism thinking its way back to something more like the old humanity. Something like this anyway.”

“It sounds fucking awful. Why would anyone want this?”

“They’d want it because thousands of years of learning nothing, of being destructive, controlling, cruel and never being satisfied, is a terrible burden that everyone should be glad to be free from.”

“Why have we done this, if we learn nothing? That’s a contradiction. If we learned nothing, we wouldn’t have the insight to do this.”

“Okay, okay, scratch the learned-nothing thing then. We learned that, generally, left to our own devices, we don’t change, and that we’d need an external influence to change us. In this system everyone is happy all the time, and not sinister happy. They’re chemically modulated happy, sure, but nothing bad happens to them. They aren’t turned into food or killed young or anything grim. They’re just a bit, you know, curtailed.”

“Curtailed? AI-controlled quasi-zombies, moving around in a meaningless world!”

“Well, you say this, but this is just thought from the perspective of old humanity. Old humanity strives and wants; new humanity wants for nothing. It’s almost like Buddhism.”

“AI chemically modulated Buddhism.”

“ACMB, I like it.”

“I don’t like it.”

“But why not?”

“Are you serious? You actually think making everyone brain dead is a viable option for humanity?”

“I don’t think this is a good retort. I think this kind of modulated happiness for all could be exactly the right answer.”

“But don’t you see? We’d lose exactly the things that make us human: our striving, our creativity, our longing, our intelligence.”

“You’re thinking about this all wrong. These features, these so-called essences of humanity are exactly the problem. I thought we got past this with the chemical Buddhism bit. If we had the opportunity to get out of this hell, we should do it. No amount of Beethoven is worth this.”

“Aren’t you forgetting something?”

“What?”

“You said there were two ideas.”

“Scenarios, I called them scenarios.”

“Jesus Christ, what does that matter? All you’ve done is try to sell me this one. What kind of straw man have you set up for the other?”

“The triumph of technocapital.”

“Meaning?”

“You know, cyber cities, Judge Dredd, corporate military, no health care without proper insurance, rural misery run by gangs, rife torture, rape, slavery, cannibalism even.”

“Judge Dredd was a hero.”

“Judge Dredd was a symbol of a fascist police-state future. Now who’s the fascist?”

“Or he was just a true defender of freedom under the rule of law.”

“The ACMB system has freedom, it’s just curtailed. I mean, it’s technically not curtailed, it’s just that the subject will have no desire to exercise their, uh, ‘pernicious freedom’. I made that up just now, do you like it? ‘Pernicious freedom.’”

“Obviously I do not.”

“I think it captures the idea. Humanity A, Humanity B. Pernicious freedom, happy freedom.”

“Zombie non-freedom.”

“Anyway, that’s the alternative. Technocapital’s triumph.”

“Do you have to say technocapital? It’s quite annoying.”

“What else should I say?”

“You could just have described it. Like say ‘there are vast technologically based cities with extreme poverty and lawless wastelands in what were once rural areas.’”

“That’s quite nice. I suppose technocapital is a bit jargony. It still sounds grim though, doesn’t it? I mean, think of the suffering.”

“Can I assume from all of this that your moral dilemma is, which one is better?”

“Bingo.”