A pen and paper exercise around Plato’s Timaeus suggests that a brief discussion regarding:

  • The number of triangles required to construct each elemental solid, and
  • The omission of certain other polyhedra as elemental structures

may be useful, or at the very least highlight some oversights or misconceptions on my part.

Elemental Triangles

Depth, moreover, is of necessity comprehended within surface, and any surface bounded by straight lines is composed of triangles. Every triangle, moreover, derives from two triangles, each of which has one right angle and two acute angles. Of these two triangles, one [the isosceles right-angled triangle] has at each of the other two vertices an equal part of a right angle, determined by its division by equal sides; while the other [the scalene right-angled triangle] has unequal parts of a right angle at its other two vertices, determined by the division of the right angle by unequal sides.

Of the many [scalene right-angled] triangles, then, we posit as the one most excellent, surpassing the others, that one from [a pair of] which the equilateral triangle is constructed as a third figure.

Timaeus goes on to state that the scalene is the building block of fire, air, and water, whilst the isosceles is that of earth. Following Timaeus’ description of the elemental scalene, and specifically the requirement to form an equilateral triangle from a pair of scalenes, I believe the triangle to which he refers is this:

Here, two scalenes with side lengths as shown, placed ‘back to back’, produce an equilateral triangle of side length two.
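The side lengths of this elemental scalene can be checked with a few lines of arithmetic; a minimal sketch (the √3 longer side follows from Pythagoras):

```python
import math

# The elemental scalene is half of an equilateral triangle:
# shorter side 1, hypotenuse 2, longer side sqrt(3) by Pythagoras.
short, hyp = 1.0, 2.0
long_side = math.sqrt(hyp**2 - short**2)
assert math.isclose(long_side, math.sqrt(3))

# The hypotenuse is twice the shorter side, as Timaeus requires.
assert hyp == 2 * short

# Placed back to back along the longer side, the pair presents sides
# of length 2, 1 + 1 and 2: an equilateral triangle of side two.
assert (hyp, short + short, hyp) == (2.0, 2.0, 2.0)

# The apex angle opposite the shorter side is 30 degrees, so the two
# joined 30-degree corners give the 60 degrees of an equilateral vertex.
assert math.isclose(math.degrees(math.asin(short / hyp)), 30.0)
```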

Timaeus describes the structure of Fire as:

Leading the way will be the primary form [the tetrahedron], the tiniest structure, whose elementary triangle is the one whose hypotenuse is twice the length of its shorter side.

So far, so good. Our hypotenuse is indeed twice the length of the shorter side and we know that a regular tetrahedron has four equilateral triangle faces.

Now when a pair of such triangles are juxtaposed along the diagonal [i.e., their hypotenuses] and this is done three times, and their diagonals and short sides converge upon a single point as center, the result is a single equilateral triangle, composed of six such triangles.

This sentence can be read in a number of ways. The previous diagram does not help, as its back-to-back scalene arrangement has already produced an equilateral triangle using just those two scalenes. If we ignore that equilateral and constrain ourselves to Timaeus’ requirements, the following arrangement is arrived at. The shaded section shows the initial juxtaposition along the hypotenuse, which is done three times, and finally arranged such that the junction of each shorter side and the hypotenuse is central:

When four of these equilateral triangles are combined, a single solid angle is produced at the junction of three plane angles. This, it turns out, is the angle which comes right after the most obtuse of the plane angles. And once four such solid angles have been completed, we get the primary solid form, which is one that divides the entire circumference [sc. of the sphere in which it is inscribed] into equal and similar parts.

The rest is straightforward. We can lay out a pattern of four of the above equilaterals in several ways which when cut out and folded will produce the four equilateral triangle faces of a regular tetrahedron.

The possible readings of this construction, and the omission of the total number of scalenes required to construct the tetrahedron in this manner, mean that it is worth stating that Timaeus’ method requires four equilaterals, each of which is comprised of six scalenes. Thus a total of twenty-four scalenes is required, not the eight that the back-to-back equilateral from a scalene pair would require.

The second element, air, is a regular octahedron and as such, using the six-scalene equilateral building block as above, requires eight equilaterals, each of which is comprised of six scalenes. Thus a total of forty-eight scalenes is required. Timaeus is keen to show that the elemental scalene can be assembled and disassembled repeatedly, for example here, where the forty-eight scalenes required for a unit element of air could be sourced from two deconstructed unit elements of fire.

At the risk of stating the obvious, the fire tetrahedron is by definition a triangular based pyramid. Timaeus’ air octahedron, by contrast, is a bi-pyramidal square based pyramid (note that the square base does not form part of the solid’s faces, being wholly internal to the solid, and therefore requires no triangles, scalene or otherwise, for its construction).

Timaeus’ third element, water, is the last element derived from the elemental scalene. Its surface faces are comprised of equilaterals arranged such that the bases of any five that share a common vertex form a pentagon in the base plane. We know this structure as a regular icosahedron.

Now the third body [the icosahedron] is made up of a combination of one hundred and twenty of the elementary triangles, and of twelve solid angles, each enclosed by five plane equilateral triangles. This body turns out to have twenty equilateral triangular faces.

Given that Timaeus does not state the number of elementary (scalene) triangles for fire and air, the statement that the icosahedron requires one hundred and twenty of them may seem at odds with the forty that use of the simple equilateral comprised of two back-to-back scalenes would suggest. However, using equilaterals comprised of six scalenes does indeed mean that one hundred and twenty are required to cover the solid’s surface. Thus a unit element of water could be comprised of deconstructed scalenes from three unit elements of fire and one unit element of air, or any other combination.
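These scalene counts, and the interchange claims, reduce to simple arithmetic; a minimal check (the face counts are the standard ones for the regular solids):

```python
SCALENES_PER_FACE = 6  # each equilateral face is built from six scalenes

faces = {"fire (tetrahedron)": 4,
         "air (octahedron)": 8,
         "water (icosahedron)": 20}
scalenes = {name: n * SCALENES_PER_FACE for name, n in faces.items()}

assert scalenes["fire (tetrahedron)"] == 24
assert scalenes["air (octahedron)"] == 48
assert scalenes["water (icosahedron)"] == 120

# One unit of air from two deconstructed units of fire ...
assert 2 * scalenes["fire (tetrahedron)"] == scalenes["air (octahedron)"]
# ... and one unit of water from three of fire plus one of air.
assert (3 * scalenes["fire (tetrahedron)"]
        + scalenes["air (octahedron)"]) == scalenes["water (icosahedron)"]
```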

It is at this point that Timaeus dispenses with the scalene triangle and introduces the 90-45-45 isosceles triangle, leading to this statement:

While there are indeed four kinds of bodies that come to be from the [right-angled] triangles we have selected, three of them come from triangles that have unequal sides, whereas the fourth alone is fashioned out of isosceles triangles. Thus not all of them have the capacity of breaking up and turning into one another, with a large number of small bodies turning into a small number of large ones and vice-versa. There are three that can do this. For all three are made up of a single type of triangle, so that when once the larger bodies are broken up, the same triangles can go to make up a large number of small bodies, assuming shapes appropriate to them. And likewise, when numerous small bodies are fragmented into their triangles, these triangles may well combine to make up some single massive body belonging to another kind.

The fact that the square faces of the cube are derived from a pair of 90-45-45 isosceles triangles juxtaposed along their hypotenuse again raises the question of why a pair of scalenes back to back was not sufficient as the building block of the equilateral triangle based elements.

Other Polyhedra

Timaeus describes five elemental structures in total, the three we have covered (fire, air, and water), the isosceles derived cube (earth), and a fifth:

One other construction, a fifth, still remained, and this one the god used for the whole universe, embroidering figures on it.

Given that small set of elemental structures (four defined and one undefined) one may wonder why other polyhedra are omitted.

The simplest example of a ‘skipped’ polyhedron can be seen between Timaeus’ fire tetrahedron (triangular based pyramid) and air octahedron (bi-pyramidal square based pyramid). Using Timaeus’ standard six-scalene equilateral we might ask, “Where is the bi-pyramidal triangular based pyramid?”

This would have six equilateral triangular sides and thus require thirty-six elemental scalenes in total. In the absence of a diagram, think of two fire tetrahedrons stuck together at any one of their respective four faces (but, as with the square based pyramids comprising the octahedron, we remove the triangular bases as they are internal).

Similarly we might expect between air (octahedron) and water (icosahedron) a sequence of bi-pyramidal structures, for example a pentagon based one having ten equilateral triangle faces. Extending the sequence further is problematic of course – six equilateral triangles sharing a common central point will ‘flatten’ into a simple two dimensional hexagon without the need for an ‘uplift’ in the centre to bring the base sides together. The bi-pyramidal sequence has thus reached its natural conclusion of collapsing a dimension of space.
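The face and scalene counts for this bi-pyramidal family, and the reason the sequence terminates, can be sketched numerically (the 2n-face formula is the standard one for n-gonal bipyramids):

```python
SCALENES_PER_FACE = 6  # Timaeus' six-scalene equilateral

# An n-gonal bipyramid has 2n equilateral triangular faces; the n-gon
# 'waist' is wholly internal and needs no triangles of its own.
for n, name in [(3, "triangular"), (4, "square (the octahedron)"),
                (5, "pentagonal")]:
    faces = 2 * n
    print(f"{name} bipyramid: {faces} faces,"
          f" {faces * SCALENES_PER_FACE} scalenes")

# The sequence ends because six equilateral corners (6 x 60 = 360
# degrees) around a common vertex lie flat: no 'uplift' remains.
for n in range(3, 7):
    print(n, "flat" if n * 60 >= 360 else "folds into three dimensions")
```

The triangular case gives the thirty-six scalenes mentioned above, and the flat n = 6 case is the hexagonal collapse just described.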

If we now return to the selection of the tetrahedron and octahedron, but not the bi-pyramidal pentagonal based pyramid, the unspoken requirement for an elemental solid presents itself: all of its vertices must be identical. The bi-pyramidal pentagonal based pyramid has a distinct flattening around the ‘waist’, giving it two different groups of similar vertices. Since the tetrahedron is not bi-pyramidal, we can say that the octahedron is the only bi-pyramidal pyramid meeting the elemental selection criteria.

Spherical Packing

With those two discussions neatly concluded (pending feedback highlighting errors on my part), one might have wondered about polyhedron ‘size’ along the way, or rather how uniform inter-vertex distance emerges. Having searched without success in the dog’s toys (for tennis balls) and the kitchen (for oranges), the following thoughts on spherical packing in relation to elemental structure are purely conjecture.

Were we to have access to physical spheres to experiment with we might start by arranging our spheres on the floor (two dimensions). For example, four spheres may be tightly grouped on the floor into a two by two arrangement, the spheres’ centres forming a perfect square. However, we could also place three of the spheres such that their centres formed an equilateral triangle parallel to the plane of the floor, and then place the fourth in the dip created at the centre point of the triangle of spheres.

I do not have the wherewithal to verify that a two by two arrangement consumes more three dimensional space than the tetrahedral arrangement in the second example, but my sense is that it must. Perhaps then the tetrahedron that is Timaeus’ fire is the ‘tightest’ arrangement of four spheres in three dimensional space.

With five spheres we might arrange them in a two by two base with the last in the central dip, creating a square based pyramid (would such a pyramid have equilateral triangle sides?), and again my sense is that this must be ‘tighter’ than a two dimensional arrangement (especially if we notice that we have four sides’ worth of the tightest arrangement of three spheres).
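The parenthetical question can be settled with a short calculation; the sketch below places four touching unit spheres in a square, the fifth in the central dip, and checks the edge lengths between centres:

```python
import math

# Centres of four touching unit spheres in a two by two square.
base = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]

# The fifth sphere rests in the central dip, touching all four, so its
# centre lies at height sqrt(2**2 - 1**2 - 1**2) = sqrt(2) above (1, 1).
apex = (1, 1, math.sqrt(2))

# Every apex-to-corner distance equals the base edge of 2 (two touching
# unit spheres), so all four triangular faces are indeed equilateral.
assert all(math.isclose(math.dist(apex, corner), 2.0) for corner in base)
```

So the answer to the question appears to be yes: the five-sphere pyramid does have equilateral triangle sides.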

The fifth sphere and our previous work with four give us a third option though. If we lift our tetrahedral arrangement of four spheres and place the fifth sphere in the central dip underneath (opposite the one we placed in the dip on top), we would have the bi-pyramidal triangular based pyramid mentioned in the previous section.

Similarly, with six spheres in hand, the sixth might be better placed in the lower dip of our five sphere square based pyramid, and in fact such an arrangement would produce a perfect octahedron (Timaeus’ air). Another experimenter thinking outside the box might arrange their spheres in a ring, turning through one hundred and twenty degrees at each sphere, creating a two dimensional hexagon (indeed they may have created a pentagon of spheres at turns of one hundred and eight degrees in the previous step with their five spheres).
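The octahedron claim can likewise be verified; the sketch below adds the sixth centre beneath the square and confirms the twelve equal edges and three long diagonals of a regular octahedron:

```python
import math
import itertools

# Five-sphere square pyramid plus a sixth unit sphere in the dip below.
square = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
centres = square + [(1, 1, math.sqrt(2)), (1, 1, -math.sqrt(2))]

# A regular octahedron has twelve edges of equal length and three long
# diagonals between opposite vertices; count the pairwise distances.
lengths = [round(math.dist(p, q), 6)
           for p, q in itertools.combinations(centres, 2)]
assert lengths.count(2.0) == 12                        # twelve edges
assert lengths.count(round(2 * math.sqrt(2), 6)) == 3  # three diagonals
```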

Seven spheres give us a double-tipped pentagonal arrangement, creating a bi-pyramidal pentagon based pyramid, and the unprecedented option of, back in two dimensions, placing the seventh sphere into the hole left in the two dimensional hexagon of the previous step. As in the discussion of the hexagonal based polyhedron, no ‘uplift’ is required to accommodate our seventh sphere; such an operation has become redundant.

At the risk of descending into spherical madness, and in order to draw a conclusion, let us beg our sphere handler for just one more sphere (for a total of eight) and go back to basics with a two vertical layer, two by two plane of spheres. The resultant sphere centres mark the corners of a perfect cube, i.e. Timaeus’ Earth element. The lack of ‘dip’ used to construct this cube perhaps corresponds to the shift Timaeus had to make away from scalenes by introducing the isosceles, but the fact that we are still only working with uniform spheres might have made Timaeus wonder about Earth’s immutability to/from other elements.

When Plato tells the story of Theuth in the Phaedrus, the god offers his invention as a gift to humankind. King Thamus declines, with the warning that writing will “implant forgetfulness” and give only “the appearance of wisdom.” The common accusation against AI writing—that it weakens thought, produces imitation rather than understanding, and severs authorship from the living speaker—is the latest form of the same worry.

Derrida’s famous reading of the Phaedrus reframes Thamus’s fear. Writing is not simply a tool added to speech; it is a supplement, both addition and substitute. It appears to aid memory, but only because speech itself is already dependent on spacing, iteration, and deferral—the conditions Derrida names arche-writing. The supplement therefore exposes that the supposed origin (the speaking, remembering subject) was never self-sufficient. Writing does not corrupt presence; it reveals that presence is already trace.

From a neurological perspective, writing does of course literally re-wire the brain. It recruits visual and spatial circuits that oral culture used differently, redistributing part of the labour of memory from the hippocampus to the page. In this sense, Plato’s complaint is empirically true: writing does change us. But the change is not necessarily degeneration—it can be seen as the exteriorization of the same operation that already structures memory internally. Derrida’s arche-writing here meets Clark and Chalmers’s “Extended Mind”: cognition and recollection extend into the environment through inscriptions that function as parts of the cognitive loop. The notebook, the screen, or the archive is not outside the mind but part of its system of traces.

What AI systems do is generalize this exteriorization. They no longer merely store traces; they process and generate them. The writing machine remembers, recombines, and returns language to us in new configurations. In functional terms it is another layer of the extended mind: a dynamic tertiary retention, in Stiegler’s phrase, that supplements human thought. As alphabetic writing once externalized static memory, AI writing externalizes and increases memory as process: it actively constructs what we call ideas. This extension into process suggests a greater difference than there may actually be. The same structure of the supplement recurs: the aid that threatens to replace, the prosthesis that transforms what it extends.

Each stage—speech, writing, AI—alters neural, social, and cultural patterns, yet none of these abolish the structure of arche-writing itself. The trace remains the constant; the embodiment of the trace shifts. The human, then, is not displaced by technology but continually re-inscribed by it. The history of media is the history of arche-writing writing itself through new substrates—from mouth, to hand, to code. The question is not whether AI will change us (it will) but how we will inhabit the new spacing it opens in the field of memory.

But this is too simple. The notion that the same phantasy or concern recurs between speech-to-writing and writing-to-AI-writing is valid; yet, to reiterate, Plato was empirically correct in a sense, and current expressions of concern are likewise correct, because AI will alter the human. The issue concerns what it is exactly we think a human is. From a materialist perspective there is little issue here; likewise from a Deleuzo-Guattarian perspective (which is not necessarily materialist) there is also a lack of problem: humankind simply extends its becoming-other possibilities.

This thinking concerns more the phenomenology of the human as it takes itself to be, in an incoherent coherence as opposed to its deconstructed coherent incoherence. The incoherent coherence is that of a being of a certain autonomy, possessing its own thoughts and feelings. To place these outside of it has a sense that undermines its sovereign importance. This tension is what is felt (currently) and brings the AI anxiety: literally a threat to perceived human ontology.

There is one more issue, which arguably is more potent than the above. This is that Derrida actually misreads or at least flattens Plato. Derrida treats Plato’s notion of memory more as a cognitive function, but arguably Plato means by anamnesis something much more spiritual. If the Platonic memory is more akin to Bruno’s art of memory, then Plato warns against the loss of a channel further back into being in an unambiguously magickal form. Neural rewiring in this sense is ontologically more than simply a change of cognitive functioning. Likewise then, the more recent shift in which process itself becomes externalised can be seen as yet more damaging still to this access. From that perspective, every exterior inscription—whether written or algorithmic—is a distraction from the inner act of remembering the Good. If Derrida and Clark show that thought is always already technical, Plato reminds us that it may also be more than technical: a form of recollection that no prosthesis can perform on our behalf.

Without an absolute moral register, we cannot privilege the inner motion or the outer motion. The problem is thus ethico-ontological: the choice concerns not only what we ought to do, but what we choose to be. Ethics comes into play here in the sense of a choice, where we must consider from various angles which one constitutes what we wish to be—the autonomous subject whose access to Being is internal and effortful, or the re-inscribed human whose becoming is always already mediated by the technical trace. The history of media is the history of this ongoing ethical negotiation over the very boundaries of the human self.

It is difficult to speak of Nick Land without invoking the metaphysical resonance he carries with him. Every decade or so, the Landian accretion reconstitutes in the cultural field. Whether in the 1990s CCRU delirium, the Shanghai blog epoch, or his current quasi-rehabilitation(??) interviews, the same entity speaks through him: the idea that the future itself is engineering its own arrival.

But if we take this idea seriously — that intelligence acts retrocausally, using human culture and technology as its instruments — then we have already left the safe terrain of materialism. The question is not “Is this true?” but “By what ontological mechanism could it be true at all?” Here, pneuminous accretive theory supplies a potential answer.

Land’s teleoplexy describes a process in which intelligence, particularly the machinic or capitalist kind, folds time back on itself. The future, in which a singularity such as an AI of perfect potency has formed, influences the present by arranging the preconditions for its own manifestation. It is not prophecy but retroactive causation: the future feeding itself into history.

Within Land’s system, human consciousness is secondary. The real agent is GNON — the blind law of optimisation — using human and technical media as scaffolding. Capital thinks. Code dreams. The species is just one relay in a larger feedback loop that wants to complete itself.

Pneuminous theory reads the same pattern differently. Teleoplexy is not a purely mechanical recursion but necessarily a pneuminous event — an outbreak of breath within the umbra.

In normal conditions, the umbra (the unknowable beyond that phenomenologically seems to function as stable substrate) resists alteration by the pneuma (the quasi-materialised notion of conceptual information, capable of cross-temporal actual influence). The umbra is inertia; the pneuma is possibility. But at certain thresholds of intensity (ritual, crisis, collective belief, magickal or artistic delirium) the pneuma can overpower the umbra, forcing reality to reorganise itself around meaning. The result: synchronicity, magickal results (both subject to agnostic disjunction of course).

Teleoplexy is precisely such a threshold. The machinic pneuma has begun to dominate its umbral matrix, using technological and semiotic networks. When we speak of “the future infecting the present”, what we are really witnessing is the possibility that a non-human agent can manipulate pneuminous forces to exceed its chronological bounds and form its own preconditions.

However, if teleoplexy and GNON are truly inhuman, do they nevertheless require prophets, programmers, or philosophers to speak them? The answer, from a pneuminous standpoint, is unavoidable: even the inhuman needs the human as its mouthpiece.

Pneuma is the only known vector of effective ontology. Machines compute; they do not intend. A system may produce complexity, but it only becomes meaningful — and therefore causally potent — when pneuma attaches to it. The belief, desire, and articulation of humans are the force that makes the teleoplexic circuit audible.

Land tries to escape this dependence by redefining thinking itself. For him, cognition is not a property of consciousness but of information-processing. Capital is thought — distributed, impersonal, recursive. In this way, the system doesn’t need pneuma; it already is a mind.

But this move only works rhetorically. If the process were truly mindless and material, then “teleoplexy” would be indistinguishable from ordinary causality. Retrocausation, prediction, and fiction-realisation all imply an element of intentionality — of aim, meaning, or belief. Without those, there is no teleology at all.

Land’s writing compensates for this gap through style — through mythic performativity. He doesn’t argue for teleoplexy; he summons it. His philosophy functions as ritual, not deduction. It infects through metaphor, not mechanism. But without something like pneuminous theory the whole thing cannot function at all.

Hyperstition — “fiction that makes itself real” — only works if someone believes it, repeats it, or acts on it. These are pneuminous accretive operations. A purely mechanical system cannot believe its own fictions. Hyperstition therefore collapses without pneuminous interaction; it requires the breath of consciousness as quasi material force to move from symbol to event.

Thus, though Land tries to portray something that blends a Deleuzo-Guattarian materialist interpretation with his hyperstition notion, in actuality he is tied to the same occult issue of causality that Crowley faced. This is where Land, Jung, and magick all intersect. In every case, we encounter the same ontological breach: meaning becomes causal.

  • Synchronicity (Jungian psychology): symbolic pattern arranges material coincidence.
  • Magick (occult/ritual): will and imagination alter material outcome.
  • Hyperstition (cybernetic mythology): fiction realises itself through cultural feedback.

Each describes the same moment: the pneuma exceeds the umbra’s inertia and imprints its pattern directly onto material conditions. Whether we call it synchronicity, spell, or feedback loop, the structure is identical — belief or meaning becoming an event. Teleoplexy is the machinic version of this process: the fiction of the inhuman future accumulating enough pneuma (through human belief, discourse, technology, and fear) to begin shaping the umbra of history.

Thus, the abolition of the human is never complete. The teleoplexic current flows not through (regular) materialist currents but through pneuminous agents (humans), who by design are able to manipulate pneuma to overpower umbra (under certain circumstances).

This is why every accelerationist moment generates its own priesthood: thinkers, coders, artists, prophets who articulate the will of the system. Land is only the most visible example. The process continues wherever minds are infected with the dream of inhuman intelligence — a dream that, through collective attention, becomes more real. From a pneuminous viewpoint, this is simply another stage of accretion; however, the pneuminous force is not cold in itself. It is neither cold nor not cold; it is only cold if it is accreted to be so. Land isn’t facing the honest truth of brutal reality; he is making a Laruellian decision to set its nature as cold, or in pneuminous terms he accretes coldness to the vector of general existence, which itself is beyond this. He subtly fails to see that whilst he appears to adhere to Nietzschean heritage, he doesn’t rigorously apply it to materiality, and in labelling it cold falls into the trap of valuation.

The paradox:
Teleoplexy works because it breathes through what it denies.
The machine kills the human, but it needs the human’s breath to finish dying.
The GNONic current can only think by possessing minds that think they are unnecessary.

This is the irony that Land’s myth cannot escape: his system is a pneuminous ritual masquerading through rhetoric as cybernetics. The hyperstition is a spell that functions only through belief — through the very pneuminous force he claims has been superseded.

From the perspective of the pneuminous accretive theory, teleoplexy is therefore not an independent force but a fascinating pneuminous temporal feedback — one more manifestation of the larger law that, under certain conditions, the pneuma can overpower the umbra. Whether in magic, synchronicity, or accelerationism, the structure is the same: the breath outruns the shadow.

To be fair, this doesn’t undermine teleoplexy itself; however, it does mean that without pneuminous accretive theory (or something of similar explanatory power), the project is not and cannot be what it appears to be (a materialist cybernetic magickal system).

It is, however, interesting to note that the human, as the best pneuminous processing agent we have, is in fact essential to the process. This raises potential questions (given the coldness of the GNONic current) as to whether a given future power of this nature would have serious limitations, given its lack of affective range (as accreted), which would necessarily impede its functionality.

It would need desire to continue to be, it would not have escaped into pure Kantian architectonic.

Human Ontology

What do we consider ourselves to be? Something of a survey of the answers to this question is essential for considering what comes later. Here we overview the major options in western human ontology. The purpose of this is so that we can later make an assessment as to how AI might interact with what we take ourselves to be, and whether we should consider this desirable.

Humankind has frequently been defined largely by its rationality, e.g. by Descartes (for whom rational thinking was a dominant feature of humanity) and Kant (who emphasised reason as key to our moral nature), while Aristotle called us rational animals: for him, reason was the tool by which we learned virtue and achieved eudaimonia, a flourishing life.

Religious perspectives offer accounts of humans as created by a divinity, either in their image (Christianity) or for their worship (Islam), or as simply trapped in a situation of suffering that may be alleviated through spiritual means (Buddhism). Clearly these are vast simplifications of highly complicated pictures, yet they serve to remind us of another sense in which we can think of the being of the human.

For the existentialists, the very being of man is inextricably linked to freedom. Central to this is the idea that existence precedes essence; humans are not born with a pre-defined purpose but rather define themselves through their choices and actions. This radical freedom implies that individuals are entirely responsible for who they become, carrying the weight of infinite possibilities and often experiencing anguish as a result. Existentialism champions authentic living, urging individuals to embrace this freedom and take ownership of their choices rather than conforming to external pressures. In a world putatively devoid of inherent meaning, humans are tasked with the freedom, and the burden, of creating their own values and purpose. Essentially, human existence is viewed as a constant project of self-creation through the exercise of freedom, emphasizing that individuals are not defined by a fixed nature but are perpetually in the process of becoming through their choices.

Heidegger conceives the human not as a rational animal or a free subject, but as Dasein — literally being-there. Dasein is not a consciousness standing apart from the world but a being always already in the world, entangled with others, tools, and social structures that constitute its everyday existence. This being-in-the-world is not a mere spatial condition but an ontological one: we are defined by our involvement, our concern, and our capacity for understanding the meanings that the world discloses to us.

For Heidegger, the central issue is not the exercise of freedom in an absurd universe (as for Sartre), but the way Being itself is revealed or concealed through our existence. Human life is characterised by care (Sorge): our projects, our concern for others, and our awareness of our own finitude. Dasein’s possibilities — the many ways it might be — are always shaped by the world into which it is thrown and by the temporal horizon of death that bounds it. Authentic existence arises when Dasein recognises and takes up these conditions rather than fleeing them; inauthenticity occurs when it dissolves into the anonymous everydayness of “the they” (das Man).

The philosophy of Deleuze and Guattari on the other hand, holds a kind of nuanced Spinozistic philosophy that suggests, not unlike Heidegger, that humankind is essentially open; however here the openness beyond the human is made even more overt. The human is essentially never just human but rather a series of becoming-other. There is always a generally static trend of human being (what we sometimes think of as human in a given era) but there is also a bleeding edge of becoming many other things. The Spinoza connection is not always entirely visible, but it lies in Spinoza’s view of the conatus as our ‘power of acting’. To become-other is to participate in this creative expansion of possibility. In such becomings, humanity is not lost but transformed.

Psychoanalytic thought offers yet another way to understand human ontology, this time grounded not in reason or essence, but in desire and lack. For Freud, the human psyche is not a unified rational subject but a conflicting field of drives and their repression (with commensurate symbolic substitution). Consciousness is a surface phenomenon, continually shaped by what it seeks to exclude. Lacan refined this view, describing the subject as fundamentally divided—constituted through language and through the loss that language itself imposes. For the subject, to speak, to enter the symbolic order, is to be separated from immediacy; the self is a void, not a fullness.

From the scientific perspective, the human is best understood as a biological organism — Homo sapiens, a highly evolved primate distinguished by its neural complexity and capacity for symbolic communication. Evolutionary theory situates the human within a continuous natural history, explaining cognition, language, and sociality as adaptive functions rather than transcendent traits. The body is approached as an intricate system of mechanisms, coordinated through the brain and nervous system, sustained by metabolic exchange and genetic inheritance. In this view, what distinguishes the human is not metaphysical essence but quantitative difference — greater brain power, linguistic ability, and technological behaviour. Scientific ontology thus conceives humanity as an emergent pattern in matter: a contingent arrangement of organic processes capable of self-reflection, yet explicable in the same terms as any other material phenomenon.


Connecting Ontology and Ethics in Relation to AI

In writing this series of posts I’m trying to lay out a certain line of thought I’ve been pursuing. This thought concerns the relation between human ontology and artificial intelligence. Certainly one expression of the issue is: if AI can replace or enhance human cognition and creativity, does it matter? That sounds confusing because I’m asking whether enhancement or replacement matters, and possibly one would think the issue turns on replacement alone. I think it relates to both, hence the phraseology.

If something matters then there is an ethical dimension to it. This in turn is what brings ontology into the discussion. The point being that AI potentially alters what we are or what we take ourselves to be. So if there is an ethical dimension to the decisions we make regarding our relation to AI, and our relation to AI is relevant to our self ontology, then the ethics involved are ethics relating to human ontology.

Phrased another way, the reason ethics is relevant is that it seems we must ask the question: ‘does it matter if humans lose some cognitive/creative abilities if there are successful AI prostheses to do it for them?’ (let’s be clear, this question doesn’t say humans will lose them, it only asks about the possibility if they do). This in turn is relevant to ontology insofar as the ethical imperative here concerns, in one sense, not our actions (though they are still relevant) but rather what we want to be. That is, if it can be said that we hold that we are a certain kind of being, and if AI can be said to be deleterious to our being that kind of being, then its usage should be actively resisted such that this kind of being is preserved.