TITLE: Evaluating Critical Reinforcement Learning Framework In the Field

ABSTRACT: Reinforcement Learning (RL) learns what action to take next by mapping situations to actions so as to maximize cumulative reward. In recent years, RL has achieved great success in inducing effective pedagogical policies for various interactive e-learning environments. However, it is often prohibitively difficult to identify the critical pedagogical decisions that actually contribute to desirable learning outcomes. In this work, using the RL framework we define critical decisions to be those states in which the agent must take the optimal action, and the Critical policy as one that carries out optimal actions in critical states while acting randomly in all others. We propose a general Critical-RL framework for identifying critical decisions and inducing a Critical policy. The effectiveness of our Critical-RL framework is empirically evaluated from two perspectives: whether optimal actions must be carried out in critical states (the necessary hypothesis) and whether carrying out optimal actions only in critical states is as effective as a fully executed RL policy (the sufficient hypothesis). Our results confirm both hypotheses.

AUTHORS: Song Ju, Guojing Zhou, Mark Abdelshiheed, Tiffany Barnes, Min Chi

NOTE: Presented in the workshop as part of the ENCORE track. This paper is from the AIED 2021 conference and can be accessed at the following link: https://link.springer.com/chapter/10.1007/978-3-030-78292-4_18
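For intuition, below is a minimal Python sketch of a Critical policy as the abstract defines it: act optimally (greedy with respect to learned Q-values) in critical states, and randomly everywhere else. The identifiers here (critical_policy, q_values, critical_states) and the greedy-on-Q notion of "optimal action" are illustrative assumptions, not the authors' implementation.

```python
import random

def critical_policy(state, q_values, critical_states, actions):
    """Return an action: argmax-Q in critical states, uniform random otherwise."""
    if state in critical_states:
        # Critical state: take the optimal action, here modeled as
        # greedy with respect to the learned Q-values (an assumption).
        return max(actions, key=lambda a: q_values[(state, a)])
    # Non-critical state: act randomly, per the abstract's definition.
    return random.choice(actions)

# Toy usage: two tutoring decisions, with state "s1" marked critical.
actions = ["give_hint", "give_problem"]
q_values = {("s1", "give_hint"): 0.9, ("s1", "give_problem"): 0.2,
            ("s2", "give_hint"): 0.5, ("s2", "give_problem"): 0.6}
print(critical_policy("s1", q_values, {"s1"}, actions))  # always "give_hint"
print(critical_policy("s2", q_values, {"s1"}, actions))  # random choice
```

Under this reading, the paper's two hypotheses compare this mixed policy against a fully executed RL policy (optimal everywhere) and against policies that act randomly in critical states.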