Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.12008/51580
Title: | Multi-task reinforcement learning in reproducing Kernel Hilbert spaces via cross-learning
Author(s): | Cerviño, Juan; Bazerque, Juan Andrés; Calvo-Fullana, Miguel; Ribeiro, Alejandro
Type: | Preprint
Keywords: | Task analysis, Training data, Navigation, Convergence, Reinforcement learning, Optimization, Multi-task learning, Meta-learning
Publication date: | 2021
Abstract: | Reinforcement learning is a framework to optimize an agent's policy using rewards that are revealed by the system as a response to an action. In its standard form, reinforcement learning involves a single agent that uses its policy to accomplish a specific task. These methods require large amounts of reward samples to achieve good performance, and may not generalize well when the task is modified, even if the new task is related. In this paper we are interested in a collaborative scheme in which multiple policies are optimized jointly. To this end, we introduce cross-learning, in which policies are trained for related tasks in separate environments, and they are constrained to be close to one another. Two properties make our new approach attractive: (i) it produces a multi-task central policy that can be used as a starting point to adapt quickly to one of the tasks trained for, and (ii) as in meta-learning, it adapts to environments related to, but different from, those seen during training. We focus on policies belonging to reproducing kernel Hilbert spaces, for which we bound the distance between the task-specific policies and the cross-learned policy. To solve the resulting optimization problem, we resort to a projected policy gradient algorithm and prove that it converges to a near-optimal solution with high probability. We evaluate our methodology with a navigation example in which an agent moves through environments with obstacles of multiple shapes and avoids obstacles not trained for.
Funders: | ARL DCIST CRA W911NF-17-2-0181; Intel Science and Technology Center for Wireless Autonomous Systems
Citation: | Cerviño, J., Bazerque, J., Calvo-Fullana, M. et al. Multi-task reinforcement learning in reproducing Kernel Hilbert spaces via cross-learning [Preprint]. Published in: IEEE Transactions on Signal Processing, vol. 69, Oct. 2021, pp. 5947-5962. DOI: 10.1109/TSP.2021.3122303.
Academic department: | Sistemas y Control
License: | Creative Commons Attribution (CC BY 4.0)
Appears in collections: | Publicaciones académicas y científicas - Instituto de Ingeniería Eléctrica
Files in this item:
File | Description | Size | Format
---|---|---|---
CBCR21.pdf | Preprint | 876.44 kB | Adobe PDF
This item is licensed under a Creative Commons License (CC BY 4.0).
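
The abstract above describes cross-learning at a high level: task-specific policies are updated with policy gradients while being constrained to stay close to a shared central policy, via a projected policy gradient method. The following Python sketch is only a rough, hypothetical illustration of that idea, not the paper's RKHS formulation or code: it trains a few toy Gaussian-bandit policies and, after each gradient step, projects each one onto an `epsilon`-ball around their mean. The toy tasks, parameter names, and projection step are all assumptions made for illustration.

```python
# Minimal sketch (NOT the authors' code) of the cross-learning idea: several
# task-specific policies take policy-gradient steps and are then projected so
# that each stays within distance `epsilon` of a shared central policy.
import numpy as np

rng = np.random.default_rng(0)

# Toy "tasks": each task rewards actions close to its own target (hypothetical).
targets = np.array([-1.0, 0.0, 1.5])
n_tasks = len(targets)

theta = np.zeros(n_tasks)   # mean of each task-specific Gaussian policy
sigma = 0.5                 # fixed exploration noise
epsilon = 0.4               # cross-learning proximity constraint
lr = 0.05                   # step size
n_samples = 32              # reward samples per update

def project(theta, center, eps):
    """Project each task parameter onto the ball of radius eps around the center."""
    diff = theta - center
    norm = np.abs(diff)
    scale = np.where(norm > eps, eps / np.maximum(norm, 1e-12), 1.0)
    return center + diff * scale

for step in range(500):
    # REINFORCE-style gradient estimate for each task's Gaussian policy.
    grads = np.zeros(n_tasks)
    for i in range(n_tasks):
        actions = theta[i] + sigma * rng.standard_normal(n_samples)
        rewards = -(actions - targets[i]) ** 2
        # grad of log N(a; theta, sigma^2) w.r.t. theta is (a - theta) / sigma^2
        grads[i] = np.mean(rewards * (actions - theta[i]) / sigma**2)
    theta = theta + lr * grads

    # Cross-learning step: form a central policy and project every task policy
    # back into an epsilon-neighborhood of it.
    central = theta.mean()
    theta = project(theta, central, epsilon)

print("central policy:", round(central, 3))
print("task policies :", np.round(theta, 3))
```

In the paper the policies live in a reproducing kernel Hilbert space and the bounded distance is between functions rather than between finite parameter vectors; the scalar parameters here only mimic that proximity constraint in the simplest possible setting.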