Ontological Depressive Realism

People take comfort in a wide range of philosophical, theological, and ontological beliefs, such as life after death or a loving and supportive deity, but scientific psychology is skeptical of such beliefs. This therapeutic tension can be resolved, without needing to settle the underlying philosophical questions, by ontological depressive realism: just as CBT theory has recognized that a degree of unrealistic optimism can be part of optimal mental health, the same stance can be extended to ontological questions of life after death and the like. This allows the scientific and clinical community to work with the full range of possible beliefs and (purported) experiences without needing to accept, or even address, whether any of them are based on any sort of objective truth.

AI chat bots the easy way

tl;dr: The only thing you need to do to make a convincing AI chat bot is not to fuck it up.

The goal of sentient AI research has been, in a sense, nearly universally misconstrued: the goal is to make something that we experience as sentient, not to make something that is sentient in an absolute sense. This is the only meaningful goal, really, because no one has direct access to anyone else’s sentience; the existence of any sentience or intelligence other than your own is only something you infer, not perceive.* Note that the only widely used definitional test for intelligence, the Turing Test, tests precisely the criterion I’m stating here: whether other humans perceive the test subject as intelligent, not whether the test subject is objectively intelligent.

Why is this? Well, to start at the beginning:

Our brains are designed to take incoming sensory information, find patterns in it, and organize it into percepts. Additionally, we have special circuits dedicated to forming percepts of conspecifics (i.e. other creatures like ourselves) and of other, non-conspecific creatures, because that was especially survival-relevant during our evolution.

Two other processes work together here. Matching a pattern is intrinsically rewarding (that’s why we like puzzles, for example). And perceptual circuits are tuned to be overly “grabby”, because it’s better to mistakenly think we saw a hungry predator and then realize we were wrong than to mistakenly think we didn’t see a hungry predator and then realize we were wrong.

Because of those two things, we naturally anthropomorphize. To put it another way, anthropomorphization isn’t some extra weird hyperactive imagination that primitive peoples had (which is how it is traditionally presented), but rather represents the normal functioning of our brains in the context of the information that is available in a given circumstance. One good example is the Heider-Simmel Illusion, widely studied in psychology and neuroscience. Another is ELIZA, a shockingly simple chat bot created in the mid-1960s based on Rogerian psychotherapy principles, which despite its simplicity is said to have convinced some people to share some of their innermost thoughts and feelings.
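
To make “shockingly simple” concrete, here is a minimal sketch of an ELIZA-style responder in Python. This is not Weizenbaum’s original DOCTOR script; the keyword patterns, reflection table, and response templates below are illustrative assumptions, but the mechanism (keyword matching, pronoun reflection, and template substitution) is essentially the whole trick.

    import random
    import re

    # Reflect first-person phrasing back at the speaker ("my work" -> "your work").
    REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "me": "you", "am": "are"}

    # Keyword-triggered templates; {0} is filled with the reflected tail of the input.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"because (.*)", ["Is that the real reason?", "What else might explain it?"]),
        (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(statement):
        # Take the first rule whose pattern matches and fill in its template.
        for pattern, templates in RULES:
            match = re.match(pattern, statement.lower().strip())
            if match:
                return random.choice(templates).format(*(reflect(g) for g in match.groups()))

    print(respond("I need a vacation"))                 # e.g. "Why do you need a vacation?"
    print(respond("I feel that my work is pointless"))  # e.g. "Why do you feel that your work is pointless?"

Restricting the domain to a Rogerian therapist’s reflective questions is doing most of the work here: almost anything the user types can be answered plausibly by turning it back into a question.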

So, if you present someone with a system that can be perceived as an intelligence, it will be… unless some other factor intervenes to prevent that from happening. (If the patterns almost line up with a percept of a human, but are slightly off, then one experiences what’s known as the “uncanny valley”.)

An important consequence of this is that the lower the bandwidth, the easier it is to be convincingly intelligent. ELIZA was convincingly intelligent because the sensory and conceptual bandwidth was limited to the narrow territory of a Rogerian psychotherapist communicating through a teletypewriter. A savvy modern person, expecting a cheesy chat bot, will have that expectation almost instantly confirmed, the illusion broken or never formed; a person in the 1960s encountering ELIZA unexpectedly, with no context for what a responsive terminal might represent other than another person, fell into that percept and in some cases stayed there quite a while. Indeed, the strategy behind Eugene Goostman, a bot widely but naïvely reported to have “passed the Turing Test” in 2014, was simply to reduce the bandwidth of available interactions by presenting the character as a 13-year-old Ukrainian boy with little cultural knowledge in common with the judges.

Thus, one major obstacle to a successful AI chat bot is that if the product is released and people expect a fully general intelligence, the conceptual bandwidth of the experience is too great and people will quickly find the holes.

Another obstacle stems from the “unless some other factor intervenes” caveat. As I said above, someone in the 1960s stumbling across ELIZA, with no context to perceive an interactive conversation as originating with anything other than a human, will have that perception reinforced and will stick with it (for a little while, at least). But a person approaching ELIZA with the attitude “I know this is not intelligent, I’m not going to be impressed until it convinces me” is not going to be impressed. The key here, though, is that that’s not a problem with AI; that’s just how our perception of intelligence or sentience works. In Capgras Syndrome, a patient physically recognizes the appearance, voice, and other physical characteristics of a familiar person, but becomes unable to recognize their personality or sense of personhood, and so believes they are an impostor or “body snatcher”. They will be quite explicit in admitting how convincing the substitution is, even conceding that specific speech patterns, facial expressions, personality quirks, etc. are consistent, but nevertheless the percept of “this is this person” never forms. (Well, “never” is an overstatement; people often recover from this delusion eventually.)

Now imagine that, for whatever reason, you believed that you were about to receive a text message from a chat bot and engage in a conversation, when in fact there was a live person at the other end of the line. How many text messages would it take for you to be completely, 100% convinced that it was a person and not a bot? If you started out believing it was a bot, and you’re familiar with the idea of text bots, then I guarantee you that even if you quickly came around to the opinion that “boy, this is a pretty good bot, I didn’t know they had become so advanced”, it would still take a long time for you to fully change your mind and believe it was a human, or you might never change it at all! (You can try out a variation of this theme here.)

The point of these thought experiments is that the perception of an Other as sentient is, to a large extent, determined by one’s expectations. Although the human mind is promiscuous in recognizing sentience everywhere, overcoming an initial set of expectations about a percept of sentience (or non-sentience) is very difficult. And this poses another obstacle to the development of a convincing AI chat bot.

In conclusion, the task of AI can be reframed as the task of making a system that is perceived as intelligent, rather than a system that is intelligent in some objective sense. This framing is more consistent with the criterion already being used to judge success, and with how the brain actually works. Framed thus, the task can theoretically be accomplished quite simply (not necessarily easily, but simply) if one is able to sufficiently narrow the scope of the interactions. However, overcoming a prior belief that the bot is unconvincing is very difficult, and behavioral flaws that clearly do not fit within the parameters of the percept can also quickly damage the illusion.


*Actually, the existence of your own sentience is only inferred, not perceived, and in fact the inference is done by the same brain circuitry that infers others’ sentience. Note that there’s a self-oriented analogue of Capgras Syndrome, the Cotard Delusion, in which one believes that one’s own self does not exist (or is dead). However, this is a much more profound discussion that I’ll leave for another time.