Noodle Wednesday. Part 1

“Hey Silvia, I’ve got a question for you.”

“What is it, Mike? If this is one of your dumb fictional scenarios, can we leave it? I really don’t have time at the moment.”

“No, no, this is like a real question.”

“Are you sure it’s nothing like that ‘are you part of the problem?’ thing that you went on about for far too long until you saw Kurt Vonnegut had already done it better?”

“That’s unfair, his idea was different to mine.”

“But arguably better.”

“His idea was more implausible; he had people living forever. I just had a realistic self-management system.”

“I remember, ‘ethical fascism’ you called it.”

“No one was ever taken away without consent.”

“It was open to abuse, and, you know… anyway, why am I getting sucked into your madness? I have things to do, real things.”

“Oh yeah, like what?”

“This pile of paperwork, for one.”

“Is it real paperwork? I bet it’s not, I bet it’s just nonsense you could ignore, and no one would care.”

“Fine Mike, what is your question?”

“Ok, so it’s more a hypothetical moral dilemma than a question. I mean there is a question, but I have to go through the scenarios to get to it.”

“You said it wasn’t a dumb fictional scenario.”

“I said it was a real question, which it is. The fact I have to go through the scenarios to get to it is a separate issue, but since you agreed to answer the question, you’ll have to hear the scenarios. QED.”

“I don’t think this is a QED situation Mike. There’s nothing you’ve demonstrated here.”

“I demonstrated that you need to hear the scenarios to get to the question.”

“That’s not really… look, fine, fine, just get on with it now.”

“Ok so there’s two scenarios. In one there’s an AI…”

“An AI, seriously?! How tedious is this going to be?”

“Just hear me out, ok? So, there’s like a super AI. It’s much smarter than us, maybe it’s conscious, maybe not, but either way its capabilities are vast, and what’s more, it’s stable and has our best interests at heart.”

“That’s nice of it.”

“Yeah, you see, that’s one of my twists, it’s not bad, it doesn’t go bad, it just stays, how do you say it, benefishee-ent.”

“No, it’s just beneficent, ben-ehf-uh-sent, or is it? Oh shit, I can’t remember, you’ve done that thing where it looks uncanny now. Ben-er-fish-ent? Is that right?”

“You’re sure there’s no hard ee sound?”

“Who cares Mike, just get on with it.”

“Ok so, we’ve developed a super capable AI with all the crazy levels of intelligence that you can think of and more besides. What’s more, humanity has collectively decided, or maybe the AI has decided, and we’ve gone along with it, that we should all get, like, a chip in the head.”

“How many of these cliches are there going to be? A super powerful AI, a chip in the head, seriously? Is the chip going to control us?”

“Yes.”

“Shoot me now. How much more of this drivel is there?”

“Just listen, ok? So we agree, the people that is. I mean, I suppose probably just most of us agree, so we have to suppose there may be a small amount of coercion, but that’s for the best in this scenario; it’s how it works. We agree that we should all have a chip in our heads because we collectively as a species can’t stop ourselves from selfish, cruel, misery-resulting behaviour that knows no limit.”

“What if I don’t agree?”

“Well, in this world, you’d have to agree, I already said that.”

“So it’s a fascist system?!”

“This is different, this is…”

“Ohmigod, this is just your ethical fascism thing again, isn’t it? You were literally about to say that, weren’t you? Weren’t you?”

“No, well, yes, sort of, but look, it’s better than the other one. No one dies here, even voluntarily.”

“They just get a chip forced in their head.”

“Yes, but most people agree it’s a good idea, and it’s an all-or-nothing situation. I consider this a strength. There’s no Musky, Trumpy, Kingy guys escaping the chip. Everyone gets it. No private party laughing at the drones. Anyway, when the chip is in, nobody would mind it being there.”

“How so?”

“Because the chip isn’t evil, it’s good. It’s going to modulate all those neurotransmittery, hormonal pathways into a kind of bland, pleasant state. I guess it will be the dopamine, serotonin, HPA-axis stuff that it’d tweak. The AI will know what to do, as it will be able to monitor all the organisms’ different molecule cascades from the chips and then control each one to maintain a kind of neurochemical homeostasis that nicely cuts all the hard edges off their desires, creative and otherwise. It will probably also impair cognitive abilities somewhat as a second kind of failsafe against the organism thinking its way back to something more like the old humanity. Something like this anyway.”

“It sounds fucking awful. Why would anyone want this?”

“They’d want it because thousands of years of learning nothing, being destructive, controlling, cruel and never being satisfied is a terrible burden that everyone should be glad to be free from.”

“Why have we done this, if we learn nothing? That’s a contradiction. If we learned nothing, we wouldn’t have the insight to do this.”

“Okay, okay, scratch the learned-nothing thing then. We learned that, generally, left to our own devices, we don’t change, and that we’d need an external influence to change us. In this system everyone is happy all the time, and not sinister happy. They’re chemically modulated happy, sure, but nothing bad happens to them. They aren’t turned into food or killed young or anything grim. They’re just a bit, you know, curtailed.”

“Curtailed? AI controlled quasi-zombies, moving around in a meaningless world!”

“Well, you say this, but that’s just thinking from the perspective of old humanity. Old humanity strives and wants; new humanity wants for nothing. It’s almost like Buddhism.”

“AI chemically modulated Buddhism.”

“ACMB, I like it.”

“I don’t like it.”

“But why not?”

“Are you serious? You actually think making everyone brain dead is a viable option for humanity?”

“I don’t think this is a good retort. I think this kind of modulated happiness for all could be exactly the right answer.”

“But don’t you see? We’d lose exactly the things that make us human, our striving, our creativity, our longing, our intelligence.”

“You’re thinking about this all wrong. These features, these so-called essences of humanity are exactly the problem. I thought we got past this with the chemical Buddhism bit. If we had the opportunity to get out of this hell, we should do it. No amount of Beethoven is worth this.”

“Aren’t you forgetting something?”

“What?”

“You said there were two ideas.”

“Scenarios, I called them scenarios.”

“Jesus Christ, what does that matter? All you’ve done is try to sell me this one. What kind of straw man have you set up for the other?”

“The triumph of technocapital.”

“Meaning?”

“You know, cyber cities, Judge Dredd, corporate military, no health care without proper insurance, rural misery run by gangs, rife torture, rape, slavery, cannibalism even.”

“Judge Dredd was a hero.”

“Judge Dredd was a symbol of a fascist police-state future. Now who’s the fascist?”

“Or he was just a true defender of freedom under the rule of law.”

“The ACMB system has freedom, it’s just curtailed. I mean, it’s technically not curtailed, it’s just that the subject will have no desire to exercise their, uh, ‘pernicious freedom’. I made that up just now, do you like it? ‘Pernicious freedom.’”

“Obviously I do not.”

“I think it captures the idea. Humanity A, Humanity B. Pernicious freedom, happy freedom.”

“Zombie non-freedom.”

“Anyway, that’s the alternative. Technocapital’s triumph.”

“Do you have to say technocapital? It’s quite annoying.”

“What else should I say?”

“You could just have described it. Like say ‘there are vast technologically based cities with extreme poverty and lawless wastelands in what were once rural areas.’”

“That’s quite nice. I suppose technocapital is a bit jargony. It still sounds grim though, doesn’t it? I mean, think of the suffering.”

“Can I assume from all of this that your moral dilemma is, which one is better?”

“Bingo.”
