Decision making, the ability to choose actions under varied circumstances, is an integral skill for autonomous agents. This tutorial focuses on problems where agents face a sequence of such decisions, and where the consequences of their actions are uncertain. Formally, such problems require careful planning to optimize objectives given a mathematical model of the agents’ environment, commonly referred to as a Markov Decision Process (MDP). The tutorial will give an introduction to the MDP model and some of its standard solution methods. Subsequently, it will present partially observable MDPs (POMDPs) as a means of extending the decision-making problem to deal with noisy and imperfect sensors. As agents may need to work together with other agents to achieve a common objective, particular attention will be devoted to multiagent decision-making models and methods. This tutorial will also include a practical exercise that follows the steps involved in applying these abstract mathematical models to decision-making problems involving physical agents. Students will have the opportunity to use open-source software tools to model and solve a multi-robot decision-making problem through MDP-based methods.
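To make the MDP model and its standard solution methods concrete, the sketch below implements value iteration, one of the classic MDP solution algorithms, on a small hypothetical two-state problem. The states, actions, transition probabilities, and rewards here are invented for illustration and are not taken from the tutorial materials.

```python
# Value iteration on a toy MDP (hypothetical example, not from the tutorial).
# transitions[s][a] is a list of (probability, next_state, reward) outcomes.
transitions = {
    0: {"a": [(1.0, 0, 0.0)], "b": [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"a": [(1.0, 1, 1.0)], "b": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor


def value_iteration(transitions, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            # Best expected return over all actions available in state s.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V


V = value_iteration(transitions, gamma)

# Extract the greedy policy: in each state, pick the action maximizing
# the one-step expected return under the converged value function.
policy = {
    s: max(
        transitions[s],
        key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a]),
    )
    for s in transitions
}
print(V, policy)
```

The same Bellman-backup structure underlies the POMDP and multiagent methods covered later in the tutorial, with the state replaced by a belief over states or a joint state of several agents.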