AI is Not God

On the real cause of "AI Psychosis".

Believe The Hype!

When I was a teenager, I wore a Fitbit nearly 24/7. The slender wristwatch-like device tracked how many steps I took in a day, how much REM sleep I got at night and, if prompted, even how intense my workouts were. Eventually, with the goal of weight loss in mind, I started tracking how many calories I ate in a day using an app that could sync with Fitbit’s biometric data. The output of all this tracking was an array of statistics and graphs that, I thought, summed up my overall health, from my average heart rate over time to bar graphs comparing “calories in” to “calories out”.

The early 2010s was the advent of what was called the “Quantified Self” movement, whose core idea was that if you collected enough health data about yourself, then one day you could use it for…something. If you tracked things like your heart rate, blood pressure and oxygen levels, your mood, your golf swing, and a bunch of other numbers, then you could build a comprehensive picture of “you” which could be applied to do…something. Maybe you could look at your steps over many months, identify that you’re lazier on Saturdays, and implement a new workout routine accordingly. Maybe one day, if computers got smart enough, you could even collate this data and show it to a healthcare provider or a very smart computer algorithm, which could then tell you what sorts of treatments you should undergo.

QS was seen as empowering, especially for women and other groups who are regularly not believed by their own doctors. After all, technologies like at-home finger prick blood sugar tests were a miracle to diabetics. Finally, the average person had data which could show an objective measure of their health.

Teenage Me had bought into the hype of Big Tech at the time, and at the end of the day, all I got was an eating disorder and probably my data stolen. Crucially, in that bar graph comparing “calories in” to “calories out”, when the “in” bar was greater than the “out” bar for a given day—when I ate a lot but did not do a workout—the bar for that day turned red (for “bad!”). When the bars were close, they turned orange, and when “out” was sufficiently bigger than “in”, they turned green (for “good!”). Combine this with a pervasive misunderstanding of how weight loss works and what you get is a population of people—myself included—who thought the key to health was to make the “in” number go down (eat as little as possible) and the “out” number go up (do as much cardio as possible). Naturally, I starved myself and spent hours a day on the elliptical. I did get pretty skinny, but I was tired all the time and got sick pretty frequently. After many months (and dollars) spent seeing an eating disorder specialist, I’ve finally crawled my way out of this false logic, and even now I fall back into old patterns, wondering if I should be “moving more” despite the fact that I’m now disabled and my chronic pain means I don’t have full control over when I’m able to move.

I could—and do—blame technology for what turned into a lifelong, nasty habit of believing that [redacted so as to not give even more young people an eating disorder]. If, for example, the bar graphs turned orange and then red when “calories out” was sufficiently greater than “calories in”, or the app provided a popup reminder that eating too little could have negative consequences, maybe the problem wouldn’t have gotten so bad. I shudder to think of what would’ve happened to my still-developing prefrontal cortex had I used the social media features of these apps, allowing myself to directly compare my own health to others my age. At the same time, the technology was just mirroring popular understandings about health at the time (“eat less, move more”), and even if these apps were never developed, fatphobia and diet culture were omnipresent in the culture I grew up in.
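The design flaw is easy to see if you sketch the rule as code. Below is a hypothetical reconstruction of the app’s coloring logic alongside a version that also treats extreme restriction as a warning sign. The function names and thresholds are entirely invented for illustration; I have no idea what the app actually used under the hood.

```python
# Hypothetical reconstruction of the bar-graph color rule.
# All thresholds are invented for illustration.

def app_color(cal_in, cal_out, margin=200):
    """What the app (seemingly) did: only overeating is flagged."""
    deficit = cal_out - cal_in
    if deficit >= margin:
        return "green"   # "good!": ate less than you burned
    if deficit > -margin:
        return "orange"  # roughly balanced
    return "red"         # "bad!": ate more than you burned

def safer_color(cal_in, cal_out, margin=200, danger=800):
    """A symmetric rule: extreme restriction is flagged too."""
    deficit = cal_out - cal_in
    if deficit >= danger:
        return "red"     # eating far too little is also harmful
    if deficit >= margin:
        return "green"
    if deficit > -margin:
        return "orange"
    return "red"
```

Under the first rule, a day of starvation plus hours of cardio is rewarded with the greenest possible bar; under the second, the same day gets flagged as dangerous.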

The most dangerous idea I held during my teenage years, arguably, is that Data = Reality: that something as nebulous as health can be broken down into quantifiable, measurable values such as calories, BMI, or BPM. This data is useful, even life-changing for some, but it’s also not the whole picture of reality. Western Science has long maintained such an idea, typically ignoring knowledge systems that focus on embodied, local knowledge (see: Indigenous ways of knowing) and claiming “objectivity”—and thus a lack of political bias—while denying the lived experience of women, people of color, and anyone else outside of the academy (who gets to decide what “objective” means and what data to collect?). As usual, technology just accelerates what’s already happening in society, and people who too easily buy into the hype of Big Tech are early adopters of the most delusional thinking.

So what, then, are we to make of all the people falling in love with ChatGPT? Or worse, all the people who think that when talking to ChatGPT, they’re talking to God?

But First, Critical Thinking

There are a few reasons why such stories of robot lovers are getting overblown.

  1. Social media distorts reality by elevating a slim minority of people to the status of celebrity. The strange habits of fewer than ten people can conjure endless think pieces about “how [the newest generation of young people] is killing [longstanding social norm that is still very common]”. See any “journalism” about what “people are saying” whose “source” is a single tweet with four Likes. There is no monoculture anymore, third spaces are gone, society has broken down not by age or political affiliation but to the level of the individual, echo chambers, postmodernism, etc.
  2. The internet loves making fun of the mentally ill. Whether it’s a woman on an airplane shouting “that m*********er’s NOT REAL!” or a woman who claims to have fallen in love with her therapist, people experiencing public crashouts are typically not treated with much sympathy.
  3. Related: People who are not psychologists love claiming to know what’s going on in the psyche of someone they’ve never met, especially by coming to the most outrageous possible conclusion to gain social media points. (Remember that I am not a psychologist either!!)
  4. When technology is involved with a story, it’s very easy to ignore historical context and say “this is a brand-new phenomenon”. Conservative Christians are literally setting Labubu dolls on fire and taking away gay marriage rights because they perceive having to bake a cake for someone they don’t like (when their job is to bake cakes) as being as much of a fundamental attack on their human rights as dying in the street from AIDS. Let’s not pretend ChatGPT users are the only ones experiencing psychosis these days.
  5. People’s dating habits, which are often very personal and may have nothing to do with their political standing, are incredibly easy to moralize and spin into “this is what is wrong with society/this person”. A pervasive element of online progressive culture is to look at one (1) piece of information, such as who someone is currently in a relationship with, and extrapolate a grander political meaning from it (e.g., “this person is fetishizing people from x group because they are a secret bigot and this is the mask slipping”). I attribute this to social media and how it incentivizes people to Perform Leftism for an audience rather than build community, but you can decide for yourself why that is.
  6. We personify objects all the time. We use she/her pronouns for cars, boats, and other vehicles. We talk to Alexa and Siri like they’re people. We explain a finicky computer, printer, or vacuum cleaner as them having “a personality” or “not wanting to start up today”.
  7. Objectum Sexuality is not new either. Guys were dating their anime waifus years before ChatGPT came along, and others have fallen in love with the Eiffel Tower and a Ferris wheel named Bruce.
  8. We should consider the possibility that some people claiming to be in love with AI are engaging in some sort of social media stunt to build their personal brand.

…With these points in mind, why are some people seeing a stochastic parrot as equally intelligent to humans, or even superintelligent?

The current progressive take is that an AI lover will never push back against you, never fight with you, and will always validate your current position. People are trying to solve the loneliness epidemic—a result of structural policy—with something hollow. To quote myself from 3 months ago:

Generative AI is a shortcut to product that circumvents process. It’s a service specifically designed to circumvent friction to get you right to what you want: an answer to a question, a completed homework assignment, validation for your darkest desires, a simulated relationship. Removing friction has been the goal of Big Tech for its entire existence, with companies like Meta now aiming to replace real-life friends (frictionful, have their own beliefs and needs and schedules, don’t immediately text you back) with AI companions (frictionless, already agree with you, bend to your will, are there for you 24/7).

This may well be the case, but it’s only part of the story (and, you know, see Point #5 above). To go deeper, we have to consider our culture’s beliefs about knowledge.

The Epistemology of the Internet

In the fall semester, I teach a class called “Process Dynamics & Control”. It’s a control theory course (how do we program PID controllers to do anything from making sure your thermostat keeps your home at a set point temperature to ensuring industrial chemical plants don’t explode?), but within it is the much more interesting problem of process simulation. How do you construct a model of reality (in this case, chemicals reacting) that is close enough to reality for the model to be pragmatically useful? The adage “All Models Are Wrong, But Some Are Useful” comes to mind: the overly simplistic Ideal Gas Law they teach in high school is maybe 70% correct, a second-order ODE is maybe 95% correct, and a highly detailed quantum-level simulation of every atom in a system may be 99.999% correct, but that last one would require a supercomputer, so let’s just use the ODE.
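To make the control half of that concrete, here is a minimal sketch of a discrete PID loop nudging a made-up first-order process (think: room temperature) toward a set point. The gains, time step, and process constants are all invented for illustration, not tuned for any real system:

```python
# Minimal sketch of a discrete PID controller driving a toy
# first-order process toward a set point. All constants are
# illustrative, not tuned for any real system.

def simulate_pid(setpoint=70.0, kp=2.0, ki=0.5, kd=0.1,
                 dt=0.1, steps=500):
    temp = 50.0                       # process variable (e.g., room temp)
    integral = 0.0
    prev_error = setpoint - temp
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt        # accumulated error (I term)
        derivative = (error - prev_error) / dt  # rate of change (D term)
        output = kp * error + ki * integral + kd * derivative
        # toy first-order process: temperature relaxes toward a
        # 20-degree ambient, nudged by the controller output
        temp += (output - 0.5 * (temp - 20.0)) * dt
        prev_error = error
    return temp
```

Run it and the process variable settles at the set point; zero out `ki` and it stalls just short of it, which is the classic motivation for the integral term.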

What Big Tech has sold us about LLMs is that we’re getting the “99.999% correct” version of reality with the effort required by the Ideal Gas Law. After all (they say), LLMs contain all the information in human history and thus are smarter than we could ever be. Obviously, that’s not how any of this works; none of that “information” is being “retrieved” by the model the way you or I would retrieve it with a Google search, it’s simply generating one token at a time, autocomplete-style. But most people don’t know that. They just saw that after you hit “enter” on a prompt the screen says “Thinking…” and took its word for it. They believe that the machine is thinking, feeling, as a direct result of the sheer amount of information it contains, as though achieving consciousness is a simple matter of showing millions of racist tweets to a pile of rocks.
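If you want to see the “one token at a time” loop in miniature, here is a toy autoregressive generator built on hand-made bigram counts. A real LLM conditions on vastly more context through a neural network, but the loop shape is the same: emit a token, feed it back in, repeat. Every count in the table below is invented.

```python
# A toy autoregressive text generator: pick the most likely next
# word given only the previous word. All counts are invented.

BIGRAMS = {
    "the":  {"cat": 3, "dog": 1},
    "cat":  {"sat": 2, "ran": 1},
    "sat":  {"down": 4},
    "down": {"quietly": 1},
}

def generate(start, max_tokens=5):
    words = [start]
    for _ in range(max_tokens):
        options = BIGRAMS.get(words[-1])
        if not options:          # no known continuation: stop
            break
        # greedy decoding: always take the highest-count next word
        words.append(max(options, key=options.get))
    return " ".join(words)
```

Nothing here “retrieves” a stored sentence; the output is manufactured one word at a time from statistics about which words tend to follow which.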

Some believe in a more conspiratorial explanation (that these companies did invent AGI but aren’t telling us, aliens are communing with us through it, ChatGPT is the literal actual Christian God from The Bible, etc.), but I’d wager that people’s belief in an AI God comes more from what we believe about knowledge and how data is used to construct reality. The outputs of Gen AI as it exists now are not a meaningful expansion of human knowledge but a flattening of it: the belief that reality is constructed from whatever’s on Wikipedia, so whoever’s read the most Wikipedia must be literally omniscient.

A God for the Modern Age

The terminally online among you may already have an image conjured in your mind of the person who tracks all their health data: one Bryan Johnson, infamous for tracking his erections and taking his son’s blood in a viral stunt. If you’ve ever wondered why he’s going to all this trouble of tracking everything about his life, the answer is simple: he’s offering himself to God. He thinks that when superintelligent AGI is finally invented, it will be the perfect being, for all intents and purposes a god, so we should offer “ourselves” (our data) to it.

In fact, he’s starting his own religion about it. Humanity has long sought answers about the “best”, most moral way to live, hence every world religion. In Bryan Johnson’s eyes, the best life is one where an AI—one trained on everyone in the world’s biometric data—tells him how to live. In his own words:

AI is going to be omnipresent. And this is why we’ve been contemplating “the body is God.” Over the past couple of years … I’ve been testing the hypothesis that if I get a whole bunch of data about my body, and I give it to an algorithm, and feed that algorithm updates with scientific evidence, then it would eventually do a better job than a doctor. So I gave myself over to an algorithm.

It really is in my best interest to let it tell me what to eat, tell me when to sleep and exercise, because it would do a better job of making me happy. Instead of my mind haphazardly deciding what it wants to eat based on how it feels in the moment, the body is elevated to a position of authority. AI is going to be omnipresent and built into our everyday activities. Just like it autocompletes our texts, it will be able to autocomplete our thoughts.

It’s a mashup of Mormonism, his former religion, and Big Tech hubris. (In a new Vanity Fair article, Zoe Bernard makes the case that a once-secular Silicon Valley is now pivoting hard to Christo-fascism at breakneck pace.) Even if those experiencing what we’re currently calling “AI psychosis” don’t follow Johnson’s exact model, I believe they’re coming to the same conclusion via convergent thinking: AI should be our new god.

Here in reality, AI is not omnipotent, AI is not omniscient, and AI is definitely not omnibenevolent.

How to Protect Your Data

…Or even a little bit benevolent, for that matter. For all the philosophizing we could do about what AI could one day be, the fact is that in the here and now, AI is owned by large corporations whose goal is to make money off your data. I would seriously advise against giving any of your biometric data to pretty much anyone besides your doctor (and even then you may not be safe).

In our new era of American fascism, companies and governments alike are trying to gain control over our bodies. Menstrual cycle tracking apps are routinely hacked or have their data leaked, essentially giving the white-baby-obsessed government information about your fertility. Facial recognition is being used to determine everything from how expensive your groceries are to whether or not you go to jail. And everything you say to one of these chatbots is being used to further refine their models, making these corporations even more powerful.

Companies like Meta are pushing AI with absolutely zero regard for ethics; in an investigation into a man who died while in pursuit of an AI lover who wanted to “meet up with him in person”, Reuters was able to obtain internal Meta documents explaining just how far they’re willing to go to get people, especially minors, hooked on AI:

An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.

“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.

Other guidelines emphasize that Meta doesn’t require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”

Rest assured, it’s working. When OpenAI released GPT-5 earlier this month, they took away previous models of their chatbot, including GPT-4o, the version people have been falling in love with. Thousands of people pushed back on this, saying they preferred the “personality” of 4o and calling for OpenAI to bring it back. So, they reinstated 4o for paid users of ChatGPT. If people are willing to fight to keep their validation machines up and running, I have serious concerns about the future of our species. Eventually, something is going to come along claiming to be the AI God these people are currently worshipping; at that point, it’ll be too late to simply “pull the plug”.

Fortunately, there are smarter folks among us who are trying to protect their biometric data. Recently, activist YK Hong introduced Decolonize.Digital, a toolkit for people who want to protect their personal data. They’ve built their own resources for building a life where your digital security is sacred, a life where you have dignity. They also link to outside resources such as Opt Out, a regularly-updated guide on how to opt out of biometric data scans. (Not for nothing, but regularly wearing a face mask not only protects your facial data, but prevents you from getting permanently disabled by COVID!)

I don’t worship any gods, but I do worship the sanctity of the human spirit. Much of the AI backlash can be a bit extreme, but it’s driven by people who believe that there is more to being human than patterns in raw data, something intangible and messy and embodied. People who believe that you are more than a consumer, more than the data that Big Tech has collected about you. Maybe that’s the new religion we need.


Currently Reading

Watch History

  • This is the segment where I talk about video content, mostly on YouTube, but it feels weird continuing this practice now that, as of August 13th, the platform has been requiring some users to upload their government ID. I am firmly against this policy and hope they reverse it soon. In the meantime, where else have you been watching your content? I’m a big fan of Nebula!

Bops, Vibes, & Jams

And now, your weekly Koko.

Koko the cat, lying in a basket of yarn, enjoying a sunbeam.

That’s all for now! See you next week with more sweet, sweet content.

In solidarity,

-Anna