Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.12008/51580
How to cite
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cerviño, Juan | - |
dc.contributor.author | Bazerque, Juan Andrés | - |
dc.contributor.author | Calvo-Fullana, Miguel | - |
dc.contributor.author | Ribeiro, Alejandro | - |
dc.date.accessioned | 2025-09-11T18:01:35Z | - |
dc.date.available | 2025-09-11T18:01:35Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Cerviño, J., Bazerque, J., Calvo-Fullana, M. et al. Multi-task reinforcement learning in reproducing Kernel Hilbert spaces via cross-learning [Preprint]. Published in: IEEE Transactions on Signal Processing, vol. 69, Oct. 2021, pp. 5947-5962. DOI: 10.1109/TSP.2021.3122303. | es |
dc.identifier.uri | https://hdl.handle.net/20.500.12008/51580 | - |
dc.description.abstract | Reinforcement learning is a framework to optimize an agent’s policy using rewards that are revealed by the system as a response to an action. In its standard form, reinforcement learning involves a single agent that uses its policy to accomplish a specific task. These methods require large amounts of reward samples to achieve good performance, and may not generalize well when the task is modified, even if the new task is related. In this paper we are interested in a collaborative scheme in which multiple policies are optimized jointly. To this end, we introduce cross-learning, in which policies are trained for related tasks in separate environments, and they are constrained to be close to one another. Two properties make our new approach attractive: (i) it produces a multi-task central policy that can be used as a starting point to adapt quickly to one of the tasks trained for, and (ii) as in meta-learning, it adapts to environments related to but different from those seen during training. We focus on policies belonging to reproducing kernel Hilbert spaces, for which we bound the distance between the task-specific policies and the cross-learned policy. To solve the resulting optimization problem, we resort to a projected policy gradient algorithm and prove that it converges to a near-optimal solution with high probability. We evaluate our methodology with a navigation example in which an agent moves through environments with obstacles of multiple shapes and avoids obstacles not trained for. | es |
dc.description.sponsorship | ARL DCIST CRA W911NF-17-2-0181 | es |
dc.description.sponsorship | Intel Science and Technology Center for Wireless Autonomous Systems | es |
dc.format.extent | 16 p. | es |
dc.format.mimetype | application/pdf | es |
dc.language.iso | en | es |
dc.rights | Works deposited in the Repository are governed by the Ordinance on Intellectual Property Rights of the Universidad de la República (Res. No. 91 of the C.D.C., 8/III/1994 – D.O. 7/IV/1994) and by the Ordinance on the Open Repository of the Universidad de la República (Res. No. 16 of the C.D.C., 07/10/2014) | es |
dc.subject | Task analysis | es |
dc.subject | Training data | es |
dc.subject | Navigation | es |
dc.subject | Convergence | es |
dc.subject | Reinforcement learning | es |
dc.subject | Optimization | es |
dc.subject | Multi-task learning | es |
dc.subject | Meta-learning | es |
dc.title | Multi-task reinforcement learning in reproducing Kernel Hilbert spaces via cross-learning | es |
dc.type | Preprint | es |
dc.contributor.filiacion | Cerviño Juan, University of Pennsylvania, Philadelphia, USA | - |
dc.contributor.filiacion | Bazerque Juan Andrés, Universidad de la República (Uruguay). Facultad de Ingeniería. | - |
dc.contributor.filiacion | Calvo-Fullana Miguel, Massachusetts Institute of Technology, Cambridge, USA | - |
dc.contributor.filiacion | Ribeiro Alejandro, University of Pennsylvania, Philadelphia, USA | - |
dc.rights.licence | Creative Commons Attribution License (CC BY 4.0) | es |
udelar.academic.department | Sistemas y Control | es |
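The abstract above describes cross-learning: task-specific policies are optimized jointly while being constrained to stay close to a central policy, via a projected policy gradient method. The following is a minimal numerical sketch of that constraint structure only, not the paper's algorithm: plain parameter vectors stand in for RKHS policies, each task's objective is replaced by an illustrative quadratic surrogate, and the function name `cross_learning`, the radius `eps`, and all other values are assumptions for illustration.

```python
import numpy as np

def cross_learning(task_optima, eps=0.5, lr=0.1, steps=200):
    """Sketch: jointly optimize per-task parameter vectors, keeping each
    within distance `eps` of their mean (the 'central policy').

    Each task objective is a quadratic surrogate ||p - opt||^2; the real
    method uses policy gradients in an RKHS.
    """
    rng = np.random.default_rng(0)
    policies = [rng.normal(size=len(task_optima[0])) for _ in task_optima]
    center = np.mean(policies, axis=0)
    for _ in range(steps):
        # Gradient step on each task's surrogate objective.
        policies = [p - lr * 2.0 * (p - opt)
                    for p, opt in zip(policies, task_optima)]
        # Central policy: mean of the task-specific policies.
        center = np.mean(policies, axis=0)
        # Projection step: pull each policy back onto the ball of
        # radius eps around the central policy.
        projected = []
        for p in policies:
            d = np.linalg.norm(p - center)
            projected.append(center + (p - center) * min(1.0, eps / d)
                             if d > 0 else p)
        policies = projected
    return np.asarray(policies), center

# Two tasks pulling in opposite directions; the constraint keeps both
# policies near the shared center instead of at their individual optima.
pols, center = cross_learning([np.array([1.0, 0.0]),
                               np.array([-1.0, 0.0])], eps=0.5)
```

The projection is what makes the central policy a useful warm start: each task policy ends within `eps` of it, so adapting to any one task requires only a short move.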
Appears in collections: | Academic and scientific publications - Instituto de Ingeniería Eléctrica |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
CBCR21.pdf | Preprint | 876.44 kB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License