If the machine predicts that you will take both Boxes A and B, Box B will be empty. But if the machine predicts that you will take Box B only, then Box B will contain $1,000,000,000. The machine has already made its prediction, and the contents of Box B have already been set. Which box/boxes do you take?

To reiterate, your choices are:

- Box A and B

- Box B only

(“Box A only” is not an option because no one is that stupid lol)

Please explain your reasoning.

My answer is:

Spoiler:

I mean, I’d choose Box B only and just gamble on the machine being right. If the machine is wrong, I’ll break that thing.


This is based on Newcomb’s Paradox (https://en.wikipedia.org/wiki/Newcomb%27s_paradox), but I increased the money to make it more interesting.
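For anyone who wants to put numbers on that gamble: with these payouts, one-boxing has the higher expected value as soon as the machine is even slightly better than a coin flip. A minimal sketch, assuming Box A holds the standard $1,000 from the original formulation (the amount isn’t stated above):

```python
# Expected payoffs in the modified Newcomb problem.
# Assumption (not stated in the post): Box A holds the standard,
# visible $1,000; Box B holds $1,000,000,000 or nothing.
BOX_A = 1_000
BOX_B = 1_000_000_000

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff given the predictor's accuracy (0.0 to 1.0)."""
    if one_box:
        # Box B is full only if the predictor correctly foresaw one-boxing.
        return accuracy * BOX_B
    # Two-boxing always yields Box A; Box B is full only if the predictor erred.
    return BOX_A + (1 - accuracy) * BOX_B

for acc in (0.5, 0.51, 0.99, 1.0):
    print(f"accuracy={acc:.2f}  "
          f"one-box={expected_value(True, acc):,.0f}  "
          f"two-box={expected_value(False, acc):,.0f}")
```

Setting the two expressions equal, one-boxing wins whenever accuracy exceeds 0.5 + BOX_A / (2 × BOX_B), i.e. about 50.00005%, so even a barely-better-than-chance predictor makes “Box B only” the better bet.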

  • TauZero@mander.xyz · 1 year ago

    Here’s my solution to Newcomb’s Paradox: the predictor can be perfectly infallible if it records your physical state and then runs a simulation to predict which box you’ll pick. For example, it could run a fancy MRI on you as you are walking through the hallway towards the room, quickly run a faster-than-real-time physical simulation, and deposit the correct opaque box into the room before you open the door. The box, the hallway, the room, and the door are all part of the simulation.
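    A toy sketch of that scheme (purely illustrative; the function names and the $1,000 in Box A are my assumptions, not part of the comment): a predictor that literally executes the agent’s exact decision procedure is infallible by construction, and the simulated run is computationally indistinguishable from the physical one.

    ```python
    # Hypothetical simulation-based predictor, as described above.
    BOX_A = 1_000            # assumed visible amount, as in standard Newcomb
    BOX_B = 1_000_000_000

    def play(agent):
        """agent() returns 'one-box' or 'two-box'."""
        # The "scan + simulation" step: the predictor executes an exact
        # copy of the agent's decision procedure ahead of time. From
        # inside that call, nothing distinguishes it from the real run.
        prediction = agent()
        box_b = BOX_B if prediction == "one-box" else 0

        # The "physical" run happens afterwards, against now-fixed boxes.
        choice = agent()
        return box_b if choice == "one-box" else BOX_A + box_b

    print(play(lambda: "one-box"))   # 1000000000
    print(play(lambda: "two-box"))   # 1000
    ```

    The two calls to agent() are the crux: both runs execute identical code, so the prediction step itself instantiates the deciding “observer”, which is exactly the point developed below.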

    Here’s the thing: a computer simulation of a person is just as conscious as a physical person, for all practical purposes of “consciousness”. So as you are inside the room making your decision, you have no way of knowing whether you are the physical you or the simulated you. The predictor is a liar, in a way: it tells the simulated you that you’ll get a billion dollars, but stating the rules is just part of the simulation! The simulated you will actually be killed/shut down when you open the box. Only the physical you has a real chance to get a billion dollars. The predictor is counting on you not to call it out on its lie or split hairs, and to just take the money.

    So if you think you might be in a simulation, the question is: are you generous enough towards your identical physical copy from one second ago to cooperate and one-box? Or are you going to spitefully deprive them of a billion dollars by two-boxing just because you are about to be killed anyway? Remember, you don’t even know which one you are. And if you are the spiteful kind, consider that we already make much smaller time-cooperative trade-offs all the time, such as the you-now taking a breath just so that the you-five-seconds-from-now doesn’t suffocate.

    What if the predictor doesn’t use an MRI or whatever? I posit that whatever prediction method it uses, if that method is sufficiently advanced to be infallible, then somewhere in the process it MUST be creating conscious observer instances.