TY - JOUR
T1 - Interpretable Solutions for Stochastic Dynamic Programming
JF - bioRxiv
DO - 10.1101/2024.08.05.606713
SP - 2024.08.05.606713
AU - Ferrer-Mestres, Jonathan
AU - Dietterich, Thomas G.
AU - Buffet, Olivier
AU - Chadès, Iadine
Y1 - 2024/01/01
UR - http://biorxiv.org/content/early/2024/08/07/2024.08.05.606713.abstract
N2 - In conservation of biodiversity, natural resource management, and behavioural ecology, stochastic dynamic programming and its mathematical framework, Markov decision processes (MDPs), are used to inform sequential decision-making under uncertainty. Models and solutions of Markov decision problems should be interpretable so that managers and applied ecologists can derive useful guidance. However, MDP solutions that have thousands of states are often difficult to understand. Solutions that are difficult to interpret are unlikely to be applied, and thus we miss an opportunity to improve decision-making. One way of increasing interpretability is to decrease the number of states. Building on recent advances in artificial intelligence, we introduce a novel approach to compute more compact representations of MDP models and solutions in an attempt to improve interpretability. This approach reduces the number of states to at most K while minimising the loss of performance compared to the original, larger state space. The reduced MDP is called a K-MDP. We present an algorithm to compute K-MDPs and assess its performance on three case studies of increasing complexity from the literature. We provide the code as a MATLAB package along with a set of illustrative problems. We found that K-MDPs can achieve a substantial reduction of the number of states with a small loss of performance for all case studies. For example, for a conservation problem involving Northern Abalone and Sea Otters, we reduced the number of states from 819 to 5 while incurring a performance loss of only 1%. For a seven-dimensional dynamic reserve selection problem, a substantial reduction in the number of states was achieved, but interpreting the optimal solutions remained challenging. Modelling problems as Markov decision processes requires experience.
While several models may represent the same problem, reducing the number of states is likely to make solutions and models more interpretable and to facilitate the extraction of meaningful recommendations. We hope that this approach will contribute to the uptake of stochastic dynamic programming applications and stimulate further research to increase the interpretability of stochastic dynamic programming solutions. Competing Interest Statement: The authors have declared no competing interest.
ER -