Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.12008/51581
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cerviño, Juan | - |
dc.contributor.author | Bazerque, Juan Andrés | - |
dc.contributor.author | Calvo-Fullana, Miguel | - |
dc.contributor.author | Ribeiro, Alejandro | - |
dc.date.accessioned | 2025-09-11T18:50:59Z | - |
dc.date.available | 2025-09-11T18:50:59Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Cerviño, J., Bazerque, J., Calvo-Fullana, M. et al. Multi-task supervised learning via cross-learning [Preprint]. Published in: 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23-27 Aug. 2021, pp. 1381-1385. | es |
dc.identifier.uri | https://hdl.handle.net/20.500.12008/51581 | - |
dc.description.abstract | In this paper we consider the problem known as multi-task learning, which consists of fitting a set of classifier or regression functions intended to solve different tasks. In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other. This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task. First, we present a simplified case in which the goal is to estimate the means of two Gaussian variables, in order to gain insight into the advantage of the proposed cross-learning strategy. Then we provide a stochastic projected gradient algorithm to perform cross-learning over a generic loss function. If the number of parameters is large, the projection step becomes computationally expensive. To avoid this situation, we derive a primal-dual algorithm that exploits the structure of the dual problem, achieving a formulation whose complexity depends only on the number of tasks. Preliminary numerical experiments for image classification with neural networks trained on a dataset divided into different domains corroborate that the cross-learned function outperforms both the task-specific and the consensus approaches. (A minimal illustrative sketch of the projected-gradient coupling follows this metadata table.) | es |
dc.description.sponsorship | NSF-Simons MoDLTheorinet | es |
dc.description.sponsorship | ANII FSE 1-2019-1-157459 | es |
dc.format.extent | 5 p. | es |
dc.format.mimetype | application/pdf | es |
dc.language.iso | en | es |
dc.rights | Works deposited in the Repository are governed by the Ordinance on Intellectual Property Rights of the Universidad de la República (Res. No. 91 of the C.D.C., 8/III/1994 – D.O. 7/IV/1994) and by the Ordinance on the Open Repository of the Universidad de la República (Res. No. 16 of the C.D.C., 07/10/2014). | es |
dc.subject | Supervised learning | es |
dc.subject | Multi-task learning | es |
dc.subject | Optimization | es |
dc.subject | Fitting | es |
dc.subject | Neural networks | es |
dc.subject | Signal processing algorithms | es |
dc.subject | Europe | es |
dc.subject | Signal processing | es |
dc.subject | Gaussian distribution | es |
dc.title | Multi-task supervised learning via cross-learning | es |
dc.type | Preprint | es |
dc.contributor.filiacion | Cerviño Juan, University of Pennsylvania, Philadelphia, USA | - |
dc.contributor.filiacion | Bazerque Juan Andrés, Universidad de la República (Uruguay). Facultad de Ingeniería. | - |
dc.contributor.filiacion | Calvo-Fullana Miguel, Massachusetts Institute of Technology, Boston, USA | - |
dc.contributor.filiacion | Ribeiro Alejandro, University of Pennsylvania, Philadelphia, USA | - |
dc.rights.licence | Creative Commons Attribution License (CC BY 4.0) | es |
udelar.academic.department | Sistemas y Control | es |
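The abstract above describes coupling per-task parameters with a stochastic projected gradient method. Below is a minimal, illustrative sketch of that idea, not the authors' implementation: it assumes one linear regressor per task with a squared loss, and a coupling constraint that keeps each task's parameters within a radius `eps` of their centroid, enforced by a simple shrink-toward-centroid step as a surrogate for the paper's projection.

```python
# Minimal sketch of cross-learning via stochastic projected gradient.
# Hypothetical setup: one linear regressor per task, squared loss, and a
# coupling constraint ||w_t - centroid|| <= eps enforced by shrinking each
# task's parameters toward the centroid (a surrogate for the paper's
# exact projection step).
import numpy as np

def couple(W, eps):
    """Pull each task's parameter vector to within eps of the tasks' centroid."""
    center = W.mean(axis=0)
    for t in range(W.shape[0]):
        dist = np.linalg.norm(W[t] - center)
        if dist > eps:
            W[t] = center + (W[t] - center) * (eps / dist)
    return W

def cross_learning(tasks, dim, eps=1.0, lr=0.1, iters=2000, seed=0):
    """tasks: list of (X, y) pairs, one per task (X: n x dim, y: n)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((len(tasks), dim))
    for _ in range(iters):
        for t, (X, y) in enumerate(tasks):
            i = rng.integers(len(y))              # draw one stochastic sample
            grad = (X[i] @ W[t] - y[i]) * X[i]    # gradient of the squared loss
            W[t] -= lr * grad                     # task-specific gradient step
        W = couple(W, eps)                        # coupling / projection step
    return W
```

Setting `eps = 0` collapses all tasks to a single consensus model, while a very large `eps` recovers independent task-specific training; intermediate values give the cross-learning trade-off described in the abstract.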
Appears in collections: Publicaciones académicas y científicas - Instituto de Ingeniería Eléctrica
Files in this item:
File | Description | Size | Format |
---|---|---|---|
CBCR21a.pdf | Preprint | 2.43 MB | Adobe PDF |
This item is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) License.