Learning to Detect an Odd Restless Markov Arm
P. N. Karthik, Rajesh Sundaresan (Indian Institute of Science, India)
D3-S7-T3: Topics in Learning II
Thursday, 15 July, 00:00 - 00:20
Thursday, 15 July, 00:20 - 00:40
This paper studies the problem of identifying an anomalous arm in a multi-armed bandit when each arm is a finite-state Markov process and the arms are restless. Here, anomaly means that the transition probability matrix (TPM) of one of the arms (the odd arm) differs from the common TPM shared by all the non-odd arms. The TPMs are unknown to a decision entity that wishes to find the index of the odd arm as quickly as possible, subject to an upper bound on the error probability. We derive an asymptotic lower bound on the expected time required to find the odd arm index, in the limit as the error probability vanishes. Further, we devise a policy based on the principle of certainty equivalence, and demonstrate that, under a continuous selection assumption and a regularity assumption on the TPMs, the policy achieves the lower bound asymptotically. Our achievability analysis is based on resolving the identifiability problem in the context of a certain countable-state controlled Markov process.
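To make the setting concrete, the sketch below simulates the problem under illustrative assumptions that are not from the paper: round-robin sampling instead of the paper's policy, two-state arms, and an L1-distance score between estimated TPMs as a stand-in for the certainty-equivalence decision rule. All function names (`simulate_restless_arms`, `estimate_tpm`, `guess_odd_arm`) are hypothetical. Note one genuine subtlety of restless arms that the sketch inherits: every arm evolves at every time step, so consecutive observations of the same arm are separated by several steps and the empirical estimate approximates a power of the true TPM rather than the TPM itself; the odd arm remains distinguishable as long as those powers differ.

```python
import numpy as np

def simulate_restless_arms(P_odd, P_common, odd_index, num_arms, horizon, rng):
    """Restless setting: every arm's state evolves at every step,
    whether or not that arm is sampled. Returns the observed
    (arm, state-before, state-after) triples for the sampled arm."""
    tpms = [np.asarray(P_odd if k == odd_index else P_common)
            for k in range(num_arms)]
    states = np.zeros(num_arms, dtype=int)
    observations = []
    for t in range(horizon):
        arm = t % num_arms  # round-robin sampling: a simplification, not the paper's policy
        prev = states.copy()
        for k in range(num_arms):
            states[k] = rng.choice(len(tpms[k]), p=tpms[k][states[k]])
        observations.append((arm, prev[arm], states[arm]))
    return observations

def estimate_tpm(transitions, num_states):
    """Empirical TPM from (state, next_state) pairs, with add-one smoothing.
    Under round-robin sampling this estimates P**num_arms, not P itself."""
    counts = np.ones((num_states, num_states))
    for s, s_next in transitions:
        counts[s, s_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def guess_odd_arm(observations, num_arms, num_states):
    """Certainty-equivalence flavour: treat the estimated TPMs as if they
    were the truth, and pick the arm whose estimate lies farthest (in L1
    distance, an illustrative choice) from the average of the others."""
    per_arm = [[(s, s_next) for a, s, s_next in observations if a == k]
               for k in range(num_arms)]
    tpms = [estimate_tpm(tr, num_states) for tr in per_arm]
    scores = []
    for k in range(num_arms):
        others = np.mean([tpms[j] for j in range(num_arms) if j != k], axis=0)
        scores.append(np.abs(tpms[k] - others).sum())
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P_common = [[0.9, 0.1], [0.1, 0.9]]   # slowly mixing non-odd arms
    P_odd = [[0.2, 0.8], [0.8, 0.2]]      # odd arm with a very different TPM
    obs = simulate_restless_arms(P_odd, P_common, odd_index=1,
                                 num_arms=3, horizon=3000, rng=rng)
    print("guessed odd arm:", guess_odd_arm(obs, num_arms=3, num_states=2))
```

The paper's actual policy additionally tracks the delay since each arm was last sampled and stops via a sequential test calibrated to the target error probability; the fixed-horizon round-robin scheme above is only meant to convey the certainty-equivalence idea.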