
Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12008/51580
Full metadata record
DC Field | Value | Language
dc.contributor.author | Cerviño, Juan | -
dc.contributor.author | Bazerque, Juan Andrés | -
dc.contributor.author | Calvo-Fullana, Miguel | -
dc.contributor.author | Ribeiro, Alejandro | -
dc.date.accessioned | 2025-09-11T18:01:35Z | -
dc.date.available | 2025-09-11T18:01:35Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Cerviño, J., Bazerque, J., Calvo-Fullana, M. et al. Multi-task reinforcement learning in reproducing Kernel Hilbert spaces via cross-learning [Preprint]. Published in: IEEE Transactions on Signal Processing, vol. 69, Oct. 2021, pp. 5947-5962. DOI: 10.1109/TSP.2021.3122303. | es
dc.identifier.uri | https://hdl.handle.net/20.500.12008/51580 | -
dc.description.abstract | Reinforcement learning is a framework to optimize an agent's policy using rewards that are revealed by the system as a response to an action. In its standard form, reinforcement learning involves a single agent that uses its policy to accomplish a specific task. These methods require large amounts of reward samples to achieve good performance, and may not generalize well when the task is modified, even if the new task is related. In this paper we are interested in a collaborative scheme in which multiple policies are optimized jointly. To this end, we introduce cross-learning, in which policies are trained for related tasks in separate environments and constrained to be close to one another. Two properties make our new approach attractive: (i) it produces a multi-task central policy that can be used as a starting point to adapt quickly to one of the tasks trained for, and (ii) as in meta-learning, it adapts to environments related to but different from those seen during training. We focus on policies belonging to reproducing kernel Hilbert spaces, for which we bound the distance between the task-specific policies and the cross-learned policy. To solve the resulting optimization problem, we resort to a projected policy gradient algorithm and prove that it converges to a near-optimal solution with high probability. We evaluate our methodology with a navigation example in which an agent moves through environments with obstacles of multiple shapes and avoids obstacles it was not trained for. | es
dc.description.sponsorship | ARL DCIST CRA W911NF-17-2-0181 | es
dc.description.sponsorship | Intel Science and Technology Center for Wireless Autonomous Systems | es
dc.format.extent | 16 p. | es
dc.format.mimetype | application/pdf | es
dc.language.iso | en | es
dc.rights | The works deposited in the Repository are governed by the Ordinance on Intellectual Property Rights of the Universidad de la República (Res. No. 91 of the C.D.C., 8/III/1994 – D.O. 7/IV/1994) and by the Ordinance of the Open Repository of the Universidad de la República (Res. No. 16 of the C.D.C., 07/10/2014) | es
dc.subject | Task analysis | es
dc.subject | Training data | es
dc.subject | Navigation | es
dc.subject | Convergence | es
dc.subject | Reinforcement learning | es
dc.subject | Optimization | es
dc.subject | Multi-task learning | es
dc.subject | Meta-learning | es
dc.title | Multi-task reinforcement learning in reproducing Kernel Hilbert spaces via cross-learning | es
dc.type | Preprint | es
dc.contributor.filiacion | Cerviño Juan, University of Pennsylvania, Philadelphia, USA | -
dc.contributor.filiacion | Bazerque Juan Andrés, Universidad de la República (Uruguay). Facultad de Ingeniería. | -
dc.contributor.filiacion | Calvo-Fullana Miguel, Massachusetts Institute of Technology, Cambridge, USA | -
dc.contributor.filiacion | Ribeiro Alejandro, University of Pennsylvania, Philadelphia, USA | -
dc.rights.licence | Creative Commons Attribution License (CC BY 4.0) | es
udelar.academic.department | Sistemas y Controles
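
The abstract above outlines the cross-learning procedure: task-specific policies are updated by policy gradient while being kept close to a common central policy, via a projected policy gradient algorithm. The following is a minimal numpy sketch of that idea, assuming a finite-dimensional policy parameterization in place of the paper's RKHS representation; the function names, the mean-as-center choice, and the Euclidean ball projection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_to_ball(theta, center, radius):
    """Project theta onto the ball of the given radius around center.

    Stand-in for the paper's RKHS-norm proximity constraint between
    each task policy and the cross-learned central policy.
    """
    diff = theta - center
    norm = np.linalg.norm(diff)
    if norm <= radius:
        return theta
    return center + radius * diff / norm

def cross_learning_step(thetas, grads, step_size, radius):
    """One projected policy-gradient step over all task policies.

    thetas    -- list of per-task policy parameter vectors (assumed
                 finite-dimensional; the paper works in an RKHS)
    grads     -- stochastic policy-gradient estimates, one per task
    step_size -- gradient ascent step size
    radius    -- allowed distance between each task policy and the
                 central policy (hypothetical constraint form)
    """
    center = np.mean(thetas, axis=0)  # central (cross-learned) policy
    updated = []
    for theta, grad in zip(thetas, grads):
        theta = theta + step_size * grad  # ascend the task's reward
        updated.append(project_to_ball(theta, center, radius))
    return updated, center
```

Called once per training iteration across all tasks, the returned center plays the role of the multi-task central policy that the abstract describes as a starting point for adapting quickly to new, related tasks.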
Appears in collections: Publicaciones académicas y científicas - Instituto de Ingeniería Eléctrica

Files in this item:
File | Description | Size | Format
CBCR21.pdf | Preprint | 876.44 kB | Adobe PDF


This item is licensed under a Creative Commons license.