2016-07-13

We study Markov decision problems where the agent does not know the transition probability function mapping current states and actions to future states. The agent has a prior belief over a set of possible transition functions and updates her beliefs using Bayes' rule. We allow her to be misspecified in the sense that the true transition probability function is not in the support of her prior. This problem is relevant in many economic settings but is usually not amenable to analysis by the researcher. We make the problem tractable by studying asymptotic behavior. We propose an equilibrium notion and provide conditions under which it characterizes steady-state behavior. In the special case where the problem is static, equilibrium coincides with the single-agent version of Berk-Nash equilibrium (Esponda and Pouzo, 2016). We also discuss subtle issues that arise exclusively in dynamic settings due to the possibility of a negative value of experimentation.
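The core mechanism in the abstract, Bayesian updating over a misspecified set of transition functions, can be illustrated with a minimal simulation. The sketch below (all names and numbers are hypothetical, and it strips out actions entirely, so it does not capture the dynamic experimentation issues the paper studies) tracks posterior beliefs over two candidate transition matrices for a two-state chain whose true transition matrix lies outside the prior's support. By a Berk-type argument, beliefs concentrate on the candidate closest to the truth in Kullback-Leibler divergence:

```python
import math
import random

# True transition matrix of a two-state chain (unknown to the agent).
# Deliberately NOT among the agent's candidate models: the prior is misspecified.
P_TRUE = [[0.7, 0.3], [0.4, 0.6]]

# The support of the agent's prior: two hypothetical candidate models.
CANDIDATES = {
    "model_A": [[0.9, 0.1], [0.5, 0.5]],
    "model_B": [[0.6, 0.4], [0.45, 0.55]],  # closer to P_TRUE in KL divergence
}

def simulate_and_update(steps=5000, seed=0):
    """Simulate the true chain and apply Bayes' rule over the candidates."""
    rng = random.Random(seed)
    # Uniform prior, tracked in log space to avoid numerical underflow.
    log_post = {name: math.log(0.5) for name in CANDIDATES}
    state = 0
    for _ in range(steps):
        nxt = 0 if rng.random() < P_TRUE[state][0] else 1
        for name, P in CANDIDATES.items():
            log_post[name] += math.log(P[state][nxt])  # log-likelihood update
        state = nxt
    # Normalize back to a probability distribution.
    m = max(log_post.values())
    weights = {n: math.exp(v - m) for n, v in log_post.items()}
    z = sum(weights.values())
    return {n: w / z for n, w in weights.items()}

posterior = simulate_and_update()
# Beliefs concentrate on the KL-minimizing candidate (here model_B),
# even though neither candidate is the true transition matrix.
```

In the static case this limiting belief is what underlies Berk-Nash equilibrium; the paper's dynamic analysis is subtler because the agent's actions affect which transitions she observes, so beliefs and behavior must be determined jointly.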