
Are we living in a simulation?

This paper got some attention and amusement when it came out a few years ago:

Constraints on the Universe as a Numerical Simulation

Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g − 2 and the current differences between determinations of α, but the most stringent bound on the inverse lattice spacing of the universe, b⁻¹ ≳ 10¹¹ GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.

A friend brought up this question in a Facebook thread recently. Here’s my response.

OK here’s the thing. If the function of the simulation is to simulate physics, for whatever reason, and our awareness is epiphenomenal to that goal, then it would make sense to run it as a finite-element simulation like the cubic grid posited in that paper. If, on the other hand, you ask the question “what would be the most efficient way to simulate what I seem to be experiencing?”, the answer is undoubtedly to have a neural-net-based simulation generating your own consciousness and feeding it stimuli from an attention-driven sparse simulation of only the relevant parts of the rest of the universe. In that kind of simulation, there’s no reason at all why you would see any artifacts in the outside world, because the outside world is generated only to an awareness-centric spec.

It’s also possible that it could be a hybrid, with a global universe buffer simulation used to provide an optimized sparse consistency across different minds’ awarenesses.

In any case, you run into the same problem philosophy has run into for all of time, which is that YOU don’t actually have any proof that the rest of us aren’t NPCs. The simulation has to generate a lot of information for you to feel like you are you, but it only has to generate a vanishingly small fraction of that amount for you to perceive me, so it wouldn’t be efficient to simulate me at the same level of richness as it’s simulating you, merely for the purpose of filtering out 99.9999999% of all that and feeding you the tiny remainder.

The standard response to that is “yeah, but it’s unlikely that you’re so special, so don’t fall into the trap of solipsism”. But that doesn’t actually make sense. You ARE absolutely that special, precisely because you have overwhelming, direct evidence for the presence of your own mental processes, and barely any evidence at all for the presence of anyone else’s. The part of the simulation that is YOU is special because it’s far and away, no contest whatsoever, the densest part of the simulation. You can argue with me all you want, but it’s prima facie true that you’re experiencing you more than you’re experiencing me, so any objection you might make is pretty hollow.

Of course I’m writing this based on what I’m experiencing, but if you’re not an NPC and you’re reading this, then it applies equally well to you. And there’s no reason you should believe there’s a fully simulated consciousness here (me) if the only thing you’re experiencing is just more wanky out-there philosophical internet blog blather. So I’m not going to flatter my(potentially-NPC)self by claiming to be real.

Enjoy the ride

 

Ontological Depressive Realism

People take comfort in a wide range of philosophical, theological, and ontological beliefs, such as life after death or a loving and supportive deity, but scientific psychology is skeptical. This therapeutic tension can be resolved, without needing to solve the underlying philosophical questions, by ontological depressive realism: just as CBT theory has recognized that unrealistic optimism can be part of optimal mental health, the same reasoning can be applied to ontological questions such as life after death. This allows the scientific and clinical community to work with the full range of possible beliefs and (purported) experiences without needing to accept, or even address, whether those beliefs are based on any sort of objective truth.

AI chat bots the easy way

tl;dr: The only thing you need to do to make a convincing AI chat bot is not to fuck it up.

The goal of sentient AI research has been, in a sense, nearly universally misconstrued: the goal is to make something that we experience as sentient, not to make something that is sentient in an absolute sense. This is the only meaningful goal, really, because no one has direct access to anyone else’s sentience; the existence of any sentience or intelligence other than your own is only something you infer, not perceive.* Note that the only widely used definitional test for intelligence is the Turing Test, which is a test for precisely the criterion I’m stating here: whether other humans perceive the test subject as intelligent, not whether the test subject is objectively intelligent.

Why is this? Well, to start at the beginning:

Our brains are designed to take incoming sensory information and find patterns in it and organize it into percepts. Additionally, we have special circuits dedicated to finding percepts of conspecifics (i.e. other creatures like ourselves) and other non-conspecific creatures, because that was especially survival-relevant during our evolution.

Two other processes work together here. Matching a pattern is intrinsically rewarding (that’s why we like puzzles, for example). And perceptual circuits are tuned to be overly “grabby”, because it’s better to mistakenly think we saw a hungry predator and then realize we were wrong than to mistakenly think we didn’t see a hungry predator and then realize we were wrong.

Because of those two things, we naturally anthropomorphize. To put it another way, anthropomorphization isn’t some extra weird hyperactive imagination that primitive peoples had (which is how it is traditionally presented), but rather represents the normal functioning of our brains in the context of the information that is available in a given circumstance. One good example is the Heider-Simmel Illusion, widely studied in psychology and neuroscience. Another is ELIZA, a shockingly simple chat bot created in the early 1960s based on Rogerian psychotherapy principles, which despite its simplicity is said to have convinced some people to share some of their innermost thoughts and feelings.

So, if you present someone with a system that can be perceived as an intelligence, it will be… unless some other factor intervenes to prevent that from happening. (If the patterns almost line up with a percept of a human, but are slightly off, then one experiences what’s known as the “uncanny valley”.)

An important consequence of this is that the lower the bandwidth, the easier it is to be convincingly intelligent. ELIZA was convincingly intelligent when the sensory and conceptual bandwidth was limited to the territory of a Rogerian psychotherapist communicating through a teletypewriter. A savvy modern person, expecting a cheesy chat bot, will have that expectation almost instantly confirmed, the illusion broken or never formed; a person in the 1960s encountering ELIZA unexpectedly, with no context for what a responsive terminal might represent other than another person, fell into that percept and in some cases stayed there quite a while. Indeed, the strategy behind Eugene Goostman, a bot widely but naïvely reported to have “passed the Turing Test” in 2014, was simply to reduce the bandwidth of available interactions by presenting the character as a 13-year-old Ukrainian boy with little cultural knowledge in common with the judges.
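To make the low-bandwidth point concrete, here is a toy sketch in the spirit of ELIZA, not its actual script or implementation: a handful of keyword-and-reflection rules confined to a Rogerian register. The rules and the example exchange are my own made-up illustrations.

```python
# Toy ELIZA-style sketch (not Weizenbaum's actual rules): keyword matching
# plus pronoun "reflection", which is enough to sustain a narrow conversation.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all keeps the conversation moving
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    cleaned = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel stuck in my job"))  # -> "Why do you feel stuck in your job?"
```

The point isn’t that this is good AI; it’s that within a narrow enough channel, even this much pattern-matching can hold the percept of a listener together for a while.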

Thus, one major obstacle to a successful AI chat bot is that if the product is released and people expect a fully general intelligence, the conceptual bandwidth of the experience is too great and people will quickly find the holes.

Another obstacle stems from the “unless some other factor intervenes” caveat. As I said above, someone in the 1960s stumbling across ELIZA with no context to perceive an interactive conversation as originating with anything other than a human will have that perception reinforced and will stick with it (for a little while, at least). But a person approaching ELIZA with the attitude “I know this is not intelligent, I’m not going to be impressed until it convinces me” is not going to be impressed. The key here, though, is that that’s not a problem with AI; that’s just how our perception of intelligence or sentience works. In Capgras Syndrome, a patient physically recognizes the appearance, voice, and other physical characteristics of a familiar person, but becomes unable to recognize their personality or sense of personhood and so believes they are an impostor or “body snatcher”. They will be quite explicit in admitting how convincing the substitution is, even admitting that specific speech patterns, facial expressions, personality quirks, etc. are consistent, but nevertheless the percept of “this is this person” never forms. (Well, never is an overstatement; people often recover from this delusion eventually.)

Now imagine that, for whatever reason, you believed that you were about to receive a text message from a chat bot and engage in a conversation, when in fact there was a live person at the other end of the line. How many text messages would it take for you to be completely, 100% convinced that it was a person and not a bot? If you started out believing it was a bot, and you’re familiar with the idea of text bots, then I guarantee you that even if you quickly came around to the opinion that boy, this is a pretty good bot, I didn’t know they had become so advanced, it would still take a long time for you to fully change your mind and believe it was a human—or you might never! (You can try out a variation of this theme here.)

The point of these thought experiments is that the perception of an Other as sentient is, to a large extent, determined by one’s expectations. Although the human mind is promiscuous in recognizing sentience everywhere, overcoming an initial set of expectations about a percept of sentience (or non-sentience) is very difficult. And this poses another obstacle to the development of a convincing AI chat bot.

In conclusion, the task of AI can be reframed as the task of making a system that is perceived as being intelligent rather than the task of making a system that is intelligent in some objective sense. This framing is more consistent with the criterion already being used for success, as well as with how the brain actually works. Framed thus, the task can theoretically be accomplished quite simply (not necessarily easily, but simply) if one is able to sufficiently narrow the scope of the interactions. However, it is very difficult to overcome a prior belief that the bot is unconvincing, and behavioral flaws that clearly do not fit the parameters of the percept can also quickly damage the illusion.


*Actually, the existence of your own sentience is only inferred, not perceived, and in fact the inference is done by the same brain circuitry that infers others’ sentience. Note that there’s a self-oriented analogue of Capgras Syndrome, the Cotard Delusion, where one believes that one’s own self does not exist (or, is dead). However, this is a much more profound discussion that I’ll leave for another time.

Nonparametric tests

Just compiled this for a friend, putting it here for the world. She was trying to understand a paper so she could do a similar analysis, and she was getting confused by what I was saying. It turned out that the paper used the wrong test: they used Kruskal-Wallis when they should have used Friedman, because they had repeated measurements in the same monkeys. It’s kind of a mess.

Basic single-factor statistical tests, “ordinary” parametric statistics:

Parametric Tests        | Independent Samples | Non-independent Samples (Repeated Measures)
More than two samples   | One-way ANOVA       | One-way repeated-measures ANOVA
Two samples             | T-test              | Paired T-test
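For concreteness, here is a minimal sketch of how the parametric cells map onto SciPy calls (assuming Python with SciPy installed; the data arrays are hypothetical placeholders, not from any real study):

```python
# Hedged sketch: one scipy.stats call per cell of the parametric table above.
# The numbers below are made-up placeholder data.
from scipy import stats

group_a = [5.1, 4.8, 6.0, 5.5]
group_b = [6.2, 6.8, 5.9, 7.1]
group_c = [7.0, 7.4, 6.6, 7.9]

# Two independent samples        -> t-test
print(stats.ttest_ind(group_a, group_b))

# Two paired (repeated) samples  -> paired t-test
print(stats.ttest_rel(group_a, group_b))

# 3+ independent samples         -> one-way ANOVA
print(stats.f_oneway(group_a, group_b, group_c))

# One-way repeated-measures ANOVA isn't in scipy.stats; statsmodels'
# AnovaRM is one commonly used option for that cell.
```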

Basic single-factor statistical tests, non-parametric statistics:

Non-Parametric Tests    | Independent Samples  | Non-independent Samples (Repeated Measures)
More than two samples   | Kruskal-Wallis Test  | Friedman Test
Two samples             | Mann-Whitney U-test  | Wilcoxon Signed-Rank Test
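And the nonparametric counterparts, again as a sketch assuming SciPy. The simulated arrays stand in for three conditions measured in the same subjects, i.e. the repeated-measures situation that calls for Friedman rather than Kruskal-Wallis:

```python
# Hedged sketch: one scipy.stats call per cell of the non-parametric table.
# Simulated placeholder data: three conditions measured in the same 8 subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cond_a = rng.normal(10, 2, size=8)
cond_b = cond_a + rng.normal(1, 1, size=8)  # correlated with cond_a (same subjects)
cond_c = cond_a + rng.normal(2, 1, size=8)

# Two independent samples        -> Mann-Whitney U-test
print(stats.mannwhitneyu(cond_a, cond_b))

# Two paired (repeated) samples  -> Wilcoxon signed-rank test
print(stats.wilcoxon(cond_a, cond_b))

# 3+ independent samples         -> Kruskal-Wallis test
print(stats.kruskal(cond_a, cond_b, cond_c))

# 3+ repeated measures (same subjects, as in the monkey paper) -> Friedman test
print(stats.friedmanchisquare(cond_a, cond_b, cond_c))
```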

How to write a cover letter: a procedure based on Cognitive Work Analysis

I took a class on Cognitive Work Analysis from Dr. John D. Lee here at the UW. I found it very interesting. A friend was asking me for help writing cover letters, specifically how to edit them down to be short enough. I don’t have a lot of experience writing cover letters per se, but I wrote this procedure based on CWA. I’d be interested in feedback, of course.

  1. Write the letter without worrying about brevity. It’s always normal to write something too long first, and then cut it down in the next rounds of editing.
  2. Make a new document (or just add text at the end of this one, or whatever works for you) and copy and paste each sentence as its own individual bullet point in a list. Don’t worry about sentence structure or whether the sentences themselves are too long; you’re just making a list of each fact or item you’re saying about yourself.
  3. Go through the list of points and separate them out into two lists: (a) Critical points that you really want to make sure they notice about you right away (it’s OK if some or all of those points are also in your resume); and (b) everything else, i.e. not critical.
  4. Go through the “not critical” list and check each item against your resume. If an item is already in your resume, then delete it from the letter. If it is not already in your resume, then copy it over into a third list called “Decide what to do”.
  5. Now you have two lists: the “critical” list, and the “decide what to do” list. (You already deleted everything else.) Everything in the “critical” list stays in the letter, obviously.
  6. Go through the “decide” list point-by-point. For each point, decide if you want to add it to your resume, or if you want to add it to your letter, or if it’s just not important. Or maybe it’s really important and you add it to the letter AND you add it to your resume. As you do this, keep in mind how much space you have left in the letter.
  7. Now go back to the letter and adjust the sentence structure and paragraph structure to make it flow again.
  8. Take a step back and look at the letter. Is it short enough now? If so, then you’re done! If it’s still too long, can you make your sentences shorter and clearer?
  9. If you can’t make it short enough with basic editing, then repeat this procedure from the beginning, but focus more on the question of what’s really critical. You have to decide on what’s critical, on the basis of how much space you actually have. If you’re trying to fit in 5 critical points but you only have room for 3, then you have to ask yourself: Since I only have room for 3, which are my top 3? If you ask yourself the question in that way, then you will almost certainly be able to pick your top 3. (Or whatever number you have room for.)
  10. If you have trouble separating out the critical points from the less critical points, then try this: Make the list of points, then write them out in rank order from most critical to least critical. Just do it quick-and-dirty, don’t worry about getting the order exactly right. Then show that ranked list to someone else, along with the job description. Almost anyone will be able to give you good feedback on whether your ranking makes sense for that job description.

Dharma dog piano therapy

I was trying to remember where I heard this story and I finally found it. It’s from Warrior King of Shambhala by Jeremy Hayward.

During Rinpoche’s visit to New York, an event occurred which is a beautiful illustration of Rinpoche’s use of humor to break through to the heart of his students. Madeline Bruser, a talented young concert pianist and piano teacher, was already a student of Rinpoche, though she had not yet met him. Madeline tells this story of the first time she met him:

I had offered to play for him during his stay in New York and he accepted. So, one evening, I went to his suite, where about a dozen of us were gathered. When he entered the room, I felt so relaxed in his presence that I walked right up to him and said, “Hello, I’m going to play for you tonight.” And he said, “Oh! You’re going to play with me!” After several minutes of silence punctuated by a few bits of conversation between him and all of us, he walked slowly over to the piano, sat down, and started slapping at the keys as though it were a big joke. I began to feel quite nervous. Then he said, “Now YOU play,” and he stood up. I sat down at the piano, but he remained standing. “Aren’t you going to sit down?” I asked him. And instead of sitting down, he picked up his little dog and stood next to the piano, waiting for me to begin. Since he was standing, everyone else had to stand also.

I launched into a dramatic performance of Beethoven’s deeply serious “Sonata in A-flat,” Opus 110. The lid of the piano was slightly raised, and soon after I started to play, Rinpoche put his dog—a very cooperative and furry lhasa apso—on the piano. Over and over, the dog slowly slid down the slanted lid as I continued to huff and puff my way through Beethoven’s intense, lofty, lyrical first movement. At times, instead of putting the dog back onto the piano, Rinpoche beat the time with one hand, making more of a joke out of the music. The twelve guests giggled, and I felt humiliated yet exhilarated. At one moment I tried to challenge him by looking directly and boldly at him, but he just peered over his glasses at me and left me feeling completely powerless.

Suddenly, a minute or so into the rollicking second movement, something switched. I found myself playing with an amazing freedom and energy that I’d never known was possible. The music leapt out of me and burst brilliantly into the room like a force of nature. It was tremendously liberating, and I noticed that Rinpoche was now holding his dog and listening attentively. I played with this total abandon for about two minutes, but it was so disorienting that I reverted back to my habitual overblown approach, and Rinpoche gave the dog more rides down the piano lid. Thus went a twenty-minute performance of one of the most profound pieces of music ever written. Beethoven and I had come into contact with an enlightened audience. The next day, I could no longer play the old way. I had received the best piano lesson of my life from a man who never played the instrument.

Happy Valentine’s Day!


Image: “Time clipping Cupid’s Wings”, Pierre Mignard, 1694.

Pre-modern Christianity was very much interested in the teaching of impermanence, just like Buddhism.

A little more about this painting can be found here.

In the meantime, this goes out to all my single friends on Valentine’s Day!