ISSN:
1432-0541
Keywords:
Bandit problem, Stochastic adaptive control, Optimal strategy, Nonparametric estimation, Stochastic approximation
Source:
Springer Online Journal Archives 1860-2000
Topics:
Computer Science, Mathematics
Notes:
Abstract. The bandit problem involves two factors: exploration, the collection of information about the environment, and exploitation, taking benefit by choosing the optimal action in the uncertain environment. For exploitation it is desirable to choose only the optimal action, while exploration requires taking a variety of (nonoptimal) actions as trials. Hence, to obtain the maximal cumulative gain, we need to compromise between the exploration and exploitation processes. We treat a situation where our actions change the structure of the environment, a simple example of which is formulated as the lob-pass problem by Abe and Takeuchi. Usually the environment in the bandit problem is specified by a finite number of unknown parameters, so that the information-collection part amounts to estimating their true values. This paper treats the more realistic situation of nonparametric estimation of the environment structure, which involves an infinite number (a functional degree) of unknown parameters. A strategy is given for this setting, and it is proved that the cumulative regret can be made of order O(log t), O((log t)^2), or O(t^(1-σ)) (0 < σ < 1) depending on the dynamics of the environment, where t is the number of trials, in contrast with the optimal order O(log t) in the parametric case.
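To illustrate the exploration–exploitation compromise the abstract describes, here is a minimal sketch of the standard UCB1 index strategy, which attains the O(log t) regret order the abstract cites as optimal for the parametric (finitely many fixed arms) case. This is not the paper's nonparametric strategy; the arm means and the horizon below are hypothetical values chosen for illustration.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Run the UCB1 strategy for a stochastic multi-armed bandit.

    pull(arm) must return a reward in [0, 1]. UCB1 plays the arm that
    maximizes (empirical mean) + (exploration bonus), so every arm keeps
    being explored occasionally while the empirically best arm is
    exploited most of the time.
    """
    counts = [0] * n_arms   # times each arm has been played
    sums = [0.0] * n_arms   # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # initialization: play every arm once
        else:
            # index = empirical mean + sqrt(2 log t / n_a) exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts, sums

# Usage: two Bernoulli arms with hypothetical success rates 0.2 and 0.8.
rng = random.Random(0)
means = [0.2, 0.8]
counts, sums = ucb1(lambda a: 1.0 if rng.random() < means[a] else 0.0,
                    n_arms=2, horizon=2000)
```

After 2000 trials the strategy has concentrated its plays on the better arm while still having sampled the worse one enough to rank them, which is exactly the compromise between information collection and benefit-taking described above.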
Type of Medium:
Electronic Resource
URL:
http://dx.doi.org/10.1007/PL00013826