Abstract: The Markov Decision Process (MDP) is the most common planning framework in the literature for sequential decisions under probabilistic outcomes; MDPs also underlie Reinforcement Learning (RL) theory. Computerized Adaptive Testing (CAT) is an assessment approach that selects questions one after another, conditioning each selection on the previous questions and answers. While an MDP defines a well-posed planning and optimization problem, CATs have traditionally solved the selection problem with shortsighted score functions. Here, we show how MDPs can model different CAT formalisms and, therefore, why the CAT community may benefit from MDP algorithms and theory. We also apply an MDP algorithm to solve a CAT and compare it against traditional score functions from the CAT literature.
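The contrast the abstract draws between MDP planning and shortsighted score functions can be illustrated with a toy sketch (all numbers and function names below are hypothetical, not from the paper): a two-level ability model where question selection is cast as a belief-state MDP, solved exactly by backward induction, next to a myopic one-step rule.

```python
# A minimal sketch, assuming a toy CAT: latent ability theta in {0: low,
# 1: high}, and P_CORRECT[theta][q] is the chance the learner answers
# question q correctly. The test ends with a classification of the ability,
# and the reward is the probability of classifying it correctly.
P_CORRECT = (
    (0.7, 0.4, 0.2),  # response probabilities for a low-ability learner
    (0.9, 0.8, 0.6),  # response probabilities for a high-ability learner
)

def posterior(belief, q, correct):
    """Bayes update of the belief over ability after observing one answer."""
    lik = [P_CORRECT[t][q] if correct else 1.0 - P_CORRECT[t][q] for t in (0, 1)]
    joint = [b * l for b, l in zip(belief, lik)]
    z = sum(joint)
    return (joint[0] / z, joint[1] / z)

def value(belief, remaining, steps_left):
    """Optimal expected probability of classifying the ability correctly,
    computed by backward induction over the belief-state MDP."""
    if steps_left == 0 or not remaining:
        return max(belief)  # classify as the currently more likely level
    best = 0.0
    for q in remaining:
        pc = sum(b * P_CORRECT[t][q] for t, b in enumerate(belief))
        rest = tuple(x for x in remaining if x != q)
        ev = (pc * value(posterior(belief, q, True), rest, steps_left - 1)
              + (1 - pc) * value(posterior(belief, q, False), rest, steps_left - 1))
        best = max(best, ev)
    return best

def first_question(belief, remaining, steps_left, lookahead):
    """Pick the next question; lookahead=1 mimics a myopic score function,
    while lookahead=steps_left plans over the full remaining horizon."""
    depth = 1 if lookahead == 1 else steps_left
    def q_value(q):
        pc = sum(b * P_CORRECT[t][q] for t, b in enumerate(belief))
        rest = tuple(x for x in remaining if x != q)
        return (pc * value(posterior(belief, q, True), rest, depth - 1)
                + (1 - pc) * value(posterior(belief, q, False), rest, depth - 1))
    return max(remaining, key=q_value)

prior = (0.5, 0.5)
items = (0, 1, 2)
print(first_question(prior, items, steps_left=2, lookahead=1))  # myopic pick
print(first_question(prior, items, steps_left=2, lookahead=2))  # MDP planning pick
print(round(value(prior, items, steps_left=2), 3))  # optimal success probability
```

Because the extra observation cannot hurt a Bayes-optimal classifier, the two-step planned value is never below the one-step value; the two rules can nevertheless disagree on which question to ask first.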