A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning

  • Juan Cruz Barsce, Dpto. de Ingeniería en Sistemas de Información, Facultad Regional Villa María, UTN
  • Jorge Palombarini, Dpto. de Ingeniería en Sistemas de Información, Facultad Regional Villa María, UTN - GISIQ, Facultad Regional Villa María, UTN - CIT Villa María - CONICET - UNVM
  • Ernesto Martinez, Instituto de Desarrollo y Diseño CONICET-UTN

Abstract

Optimization of hyper-parameters in real-world applications of reinforcement learning (RL) is a key issue, because their settings determine how fast the agent learns its policy through interaction with its environment and how informative the gathered data are. In this work, an approach that uses Bayesian optimization to perform an autonomous two-tier optimization of both representation decisions and algorithm hyper-parameters is proposed: first, categorical/structural RL hyper-parameters are treated as binary variables and optimized with an acquisition function tailored to this type of variable. Then, at a lower level of abstraction, solution-level hyper-parameters are optimized with the expected-improvement acquisition function, while the categorical hyper-parameters found at the upper level of abstraction are kept fixed. This two-tier approach is validated with both a tabular and a neural-network representation of the value function in a classic simulated control task. Results obtained are promising and open the way for more user-independent applications of reinforcement learning.
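The following minimal sketch (not the authors' code) illustrates the two-tier idea described in the abstract. It assumes a hypothetical objective evaluate_rl_agent that trains an RL agent with given hyper-parameters and returns its average return; the upper tier is simplified here to exhaustive enumeration of the binary structural choices (the paper uses an acquisition function tailored to binary variables), and the lower tier is plain Gaussian-process Bayesian optimization with the expected-improvement (EI) acquisition function over the continuous hyper-parameters, with the structural choices held fixed.

import itertools
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)


def evaluate_rl_agent(structural, continuous):
    """Hypothetical stand-in: train an RL agent and return its mean return.
    `structural` is a tuple of binary choices (e.g., use eligibility traces,
    use softmax exploration); `continuous` is (learning_rate, epsilon)."""
    lr, eps = continuous
    # Placeholder for a noisy performance surface; a real run would execute episodes.
    base = -((np.log10(lr) + 2.0) ** 2) - (eps - 0.1) ** 2
    bonus = 0.3 * structural[0] - 0.2 * structural[1]
    return base + bonus + 0.05 * rng.standard_normal()


def expected_improvement(X_cand, gp, y_best, xi=0.01):
    # Standard EI for maximization, computed from the GP posterior mean/std.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)


def lower_tier_bo(structural, n_init=5, n_iter=15):
    """Lower tier: optimize continuous hyper-parameters with EI, structural choices fixed."""
    # Search space: log10(learning rate) in [-4, -1], epsilon in [0, 0.5].
    X = rng.uniform([-4.0, 0.0], [-1.0, 0.5], size=(n_init, 2))
    y = np.array([evaluate_rl_agent(structural, (10 ** x[0], x[1])) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform([-4.0, 0.0], [-1.0, 0.5], size=(512, 2))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.max()))]
        y_next = evaluate_rl_agent(structural, (10 ** x_next[0], x_next[1]))
        X, y = np.vstack([X, x_next]), np.append(y, y_next)
    best = np.argmax(y)
    return y[best], X[best]


# Upper tier (simplified): score each binary structural configuration by the best
# return its own lower-tier optimization achieves, then keep the best one.
results = {s: lower_tier_bo(s) for s in itertools.product([0, 1], repeat=2)}
best_struct = max(results, key=lambda s: results[s][0])
print("best structural choice:", best_struct, "->", results[best_struct])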

Published
2020-05-18
How to cite
Barsce, J., Palombarini, J., & Martinez, E. (2020). A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning. Electronic Journal of SADIO (EJS), 19(2), 2-27. Retrieved from https://ojs.sadio.org.ar/index.php/EJS/article/view/165