Agents in a multi-agent system observe the environment and take actions based on their strategies. Without prior knowledge of the environment, agents must learn to act using learning techniques. Reinforcement learning allows agents to learn their desired strategies through interaction with the environment. This thesis focuses on multi-agent reinforcement learning in games: we investigate how reinforcement learning algorithms can be applied to different types of games.

We provide four main contributions in this thesis. First, we convert Isaacs' game of guarding a territory into a grid game of guarding a territory under the framework of stochastic games. We apply two reinforcement learning algorithms to the grid game and compare them through simulation results. Second, we design a decentralized learning algorithm, called the LR–I lagging anchor algorithm, and prove its convergence to Nash equilibria in two-player two-action general-sum matrix games. We then provide empirical results of applying this algorithm to more general stochastic games. Third, we apply the potential-based shaping method to multi-player general-sum stochastic games and prove policy invariance under reward transformations in general-sum stochastic games. Fourth, we apply fuzzy reinforcement learning to Isaacs' differential game of guarding a territory. A potential-based shaping function is introduced to help the defenders improve their learning performance in both the two-player and the three-player differential games of guarding a territory.
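For readers unfamiliar with the shaping method referenced in the third and fourth contributions: potential-based shaping, in the sense of Ng, Harada, and Russell, augments the environment reward with a term derived from a state potential function $\Phi$. A minimal sketch of the standard single-agent form, assuming the thesis adopts this definition, is

\[
F(s, a, s') = \gamma \Phi(s') - \Phi(s), \qquad
R'(s, a, s') = R(s, a, s') + F(s, a, s'),
\]

where $\gamma$ is the discount factor. Shaping rewards of this form leave the set of optimal policies unchanged, which is the policy-invariance property the thesis extends to general-sum stochastic games.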