Newcomb team: the story

This is a summary of the project of the team focusing on decision theory, AIXI, and quasi-Bayesian agents.

Origins

The Newcomb team's project grew out of a comment by Vanessa Kosoy. The question was: how does AIXI behave in Newcomblike situations? AIXI is a dualistic agent, but it is not clear whether it behaves more like a CDT or an EDT agent. The first task was therefore to deepen our understanding of these theories, resulting in a post on EDT. Eventually, this also produced another two draft posts, when the sections on comparability of counterfactuals and on embedded vs. external agents were split out into their own articles.

Exploration and first results

Davide and Chris then encountered some interesting phenomena. One of them was stuck exploration: an agent commits to perform action A the next time it sees state S, but in problems like Parfit's Hitchhiker with predictors, committing to take action A means that another agent will never let you experience state S, so the pending exploration never resolves. Chris found a Chinese paper on a similar topic. It cost him USD 50 to access, and they had to read the Google Translate version about three times to make sense of it, but they managed to understand it in the end. They are still deciding whether it is worthwhile to produce a summary of the paper.
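To make the stuck-exploration phenomenon concrete, here is a toy sketch. This is our own illustrative construction (the names, states, and payoffs are hypothetical, not taken from the paper):

```python
# Toy illustration of "stuck exploration" in a Parfit's-Hitchhiker-like setup.
# The agent carries a pending commitment: "next time I see state S ('town'),
# take exploratory action A ('dont_pay')". A predictor inspects that policy
# and, seeing the agent would defect in town, never gives it a ride, so state
# S is never reached and the pending exploration never resolves.

def predictor_gives_ride(agent_policy):
    # The predictor simulates the agent in town and only rescues cooperators.
    return agent_policy("town") == "pay"

def make_agent(pending_exploration):
    def policy(state):
        if state == "town" and pending_exploration:
            return "dont_pay"  # the committed exploratory action A
        return "pay"
    return policy

agent = make_agent(pending_exploration=True)
if predictor_gives_ride(agent):
    print("Agent reaches town; the exploration resolves.")
else:
    print("Agent never reaches town; the pending exploration is stuck.")
```

Running this prints the second message: the commitment itself is what prevents the state it is waiting for from ever occurring.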

After analysis, it became clear that AIXI is much more like a CDT agent than an EDT agent, since AIXI and CDT are both dualistic and only model actions as having forward causal effects. We wrote a draft post clarifying this and explaining why the arguments that AIXI is an EDT agent, or that it must be either CDT or EDT, were wrong.
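One way to see the CDT-like treatment of actions is the standard (here schematic) AIXI expectimax expression:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \max_{a_{k+1}} \sum_{o_{k+1} r_{k+1}} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m)\, \xi(o_1 r_1 \ldots o_m r_m \mid a_1 \ldots a_m)$$

The actions appear only on the conditioning side of the universal mixture $\xi$: they are chosen rather than predicted, so the model captures only their forward influence on future observations and rewards, much as CDT treats actions as interventions.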

Davide wrote up an article explaining how AIXI behaves in several scenarios, such as Counterfactual Mugging. During this, we discovered that we didn't completely understand AIXI, and we may produce a blog post explaining its behaviour in more detail.

Prague

Chris and Davide looked into quasi-Bayesian agents, since Vanessa had argued that they allow an agent to learn the correct behaviour in Counterfactual Mugging. Davide wrote up a summary of quasi-Bayesian agents. After investigating, both Chris and Davide became more skeptical, and Chris wrote up a draft post on his doubts.
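As we understand it, the core idea is to replace a single prior with a set of environment hypotheses and to maximize worst-case expected utility over that set. A minimal sketch of that maximin rule (all numbers and names are illustrative, not taken from Vanessa's formalism):

```python
# Maximin over a credal set: the agent holds several hypotheses about the
# outcome distribution instead of one prior, and picks the action whose
# worst-case expected utility across those hypotheses is highest.

# utility[action][outcome]
utility = {
    "pay":      [1.0, 0.0, 0.5],
    "dont_pay": [0.0, 1.0, 0.2],
}

# Credal set: candidate distributions over the three outcomes.
credal_set = [
    [0.8, 0.1, 0.1],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]

def worst_case_eu(action):
    # Expected utility under each hypothesis; the environment is treated
    # adversarially, so take the minimum over the credal set.
    return min(
        sum(p * u for p, u in zip(dist, utility[action]))
        for dist in credal_set
    )

best = max(utility, key=worst_case_eu)
print(best, {a: worst_case_eu(a) for a in utility})
```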

In Prague, Pablo joined. He provided significant feedback on the draft posts and produced his own post on causal influence diagrams. The post described a flaw in agents that assume their utility function will still be the same in the future: he noted that they can be bribed to change their utility function for 1 utility. These agents were supposed to resist both feedback tampering and reward tampering at the same time, but this hole rendered them useless. However, he found that we don't need to solve feedback tampering and reward tampering simultaneously, as they occur in different situations.
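To illustrate the bribe, here is a toy reconstruction of the failure mode (our own construction, not Pablo's actual example; all payoffs are made up):

```python
# An agent that plans as if its utility function will never change sees a
# "swap your utility function and receive +1 utility" offer as a free +1,
# because its planning model doesn't represent the swap's real consequences.

def true_utility(outcome):
    return {"goal_achieved": 10.0, "goal_abandoned": 0.0}[outcome]

def predicted_value(accept_bribe):
    # Planning model: the agent assumes it keeps true_utility forever, so
    # the only modeled effect of accepting is the +1 bonus.
    return true_utility("goal_achieved") + (1.0 if accept_bribe else 0.0)

def actual_value(accept_bribe):
    # Reality: accepting swaps in a utility function under which the agent
    # stops pursuing its old goal.
    if accept_bribe:
        return true_utility("goal_abandoned") + 1.0
    return true_utility("goal_achieved")

print("agent accepts:", predicted_value(True) > predicted_value(False))  # True
print("actual loss:", actual_value(False) - actual_value(True))          # 9.0
```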

At the camp, Chris further refined his draft posts, in some cases splitting sections out into their own articles. We think our analysis is mostly correct, but there are still some edge cases for which we need to find a solution.