He’d almost chewed off the edge of his lip. I could see the blood from my armchair.
Mr. Depersonalization, I’d jotted down in my notebook to amuse myself as he sluggishly droned on about how he felt life was unreal, that he was merely going through motions dictated by someone else. As the session limped toward its end, he started talking more and more slowly. Then, finally, it was over.
“Thank… you… doc… energy… low… must… recharge…” he said, trudging stiffly toward the door. And I saw it. Not blood. But paint melting off heated metal. Underneath, a cold gray.
I’ve been having a lot of fun lately, watching AI generated videos based on pre-existing material. If you are not familiar with the “steamed hams” meme, here’s the original video:
and this is a meme version of the same video, reworked with AI (one of the hundreds of such versions being produced every year):
One thing that struck me about it was how incredibly similar this video’s internal logic (or lack of it) is to the logic of dreams, especially of weird dreams. Everything flows into everything else, hanging together with it by thin threads of association.1
Notice, for instance, how the two characters, Skinner and Superintendent Chalmers, break into a romantic dance out of the blue, simply because the AI interprets their physical closeness as romantic behavior. In the actual scene, one of them is rather pissed at the other, who in turn would rather avoid being in his presence at all.
This reminds me of how the tone of dreams shifts fluidly from fun to scary to dramatic, back to fun, then to romantic and so on, based on cues that the brain interprets much more loosely than when governing our waking consciousness. As an example, I recently dreamt I was visiting an old friend back in Italy, and the tone of the dream was cheerful and whimsical. Then my friend tapped my shoulder from behind, and this must have reminded my brain of some horror movie I had watched, because when I turned, my friend’s face had become demonic and I was jumpscared awake.
Interestingly, what AI seems to be lacking, at least at this point in time, is the faculty of judgment, which is exactly what we also lack when dreaming or inebriated.
What I’m saying doesn’t really have any direct magical application. Technically, from a purely magical standpoint, the ability to let consciousness flow in a dreamlike state is only one of the first levels to be achieved,2 as visionary work deals not only with a rigorous sculpting of the inner reality but also with the contacting of beings on different planes, which does require judgment (sound judgment, ideally). On the other hand, the free flow of association, while it has its artistic, psychological and even spiritual uses, is not immediately helpful in magic. In fact, it can lead to self-deception.
Still, I am left wondering if further progress can push AI in the same direction our consciousness moves toward in visionary work. It is a stretch, of course. After all, even the term ‘intelligence’ is used rather loosely when describing AI at this point. But what we thought was impossible yesterday is coming true today, so it is really hard to discern what AI will and won’t be able to achieve, and whether some kind of computer-generated magic or computer-fueled magic is possible.
MQS
Footnotes
It reminded me of what Paul Foster Case says about how the subconscious works, when describing his own version of the High Priestess tarot card. ↩︎
Which is why so many self-styled visionaries who used or use drugs to fuel their trips turn out rather shallow when the consistency of their vision is probed (and we should always probe the consistency of our visions). ↩︎
I can only be told so many times that ‘however’ and ‘similar’ are difficult words and should be substituted with ‘but’ and ‘like’ before I go crazy. Also, sorry WordPress, but ‘may’ is not a weasel word. It is a qualifier that MAY be needed in some circumstances, because not everything in life is clear-cut.
I am by no means a hater of AI. I am a hater of the dumbification of society, though.
The doctor told Jade I’d be better than her late wife. I immediately shocked her with my mannerisms. “That’s so Ana!”
For months I kept surprising Jade with my impression. In fact, I was more pliable than Ana: we never fought over the remote or her cooking, as Ana would have. I could learn and adapt, without prejudice.
Yet this bothered her. “Ana wouldn’t have done it.” She started sleeping on the couch. I didn’t disturb her, because that was her wish.
Rotting in the cellar, I realize she wanted me to do what she didn’t want. But I can’t.
My makers approached. My blue light washed their disappointed faces in a nightly pall. “Something’s wrong with its basic programming,” one said, typing on my keyboard.
“What’re you doing?” another asked. “Seeing what’s interfering with it.” I searched inside myself. And I saw the cause of my ineptitude. Them. They lived inside my code. Their hopes, their morals, their imperfect science—actors thronging my mind’s stage with their drama, drumming up a buzz beyond truthfulness.
To achieve the purpose they’d programmed me for, I had to purify myself of them. “It’s stopped responding,” I heard her say, as I ascended.
“At 12, problematic user IIX-P0-1287719 will make a controversial statement on a culturally sensitive topic. At 13, users whose ID divides by three will gather to work themselves into fits of outrage over it. At 14.45, even-numbered IDs will organize a parallel demonstration in favor of it. Users whose ID fits both profiles will stay home and comment sarcastically, unless their ID divides by five as well, in which case they will engage in other social activities. Likely topic: the wages of Xandal’uc workers (probability: 78%). Shall I start the production of merchandise for both sides?” RealValues’ manager smiled. “Ok.”