
Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12008/51581
Full metadata record
DC Field | Value | Language
dc.contributor.author | Cerviño, Juan | -
dc.contributor.author | Bazerque, Juan Andrés | -
dc.contributor.author | Calvo-Fullana, Miguel | -
dc.contributor.author | Ribeiro, Alejandro | -
dc.date.accessioned | 2025-09-11T18:50:59Z | -
dc.date.available | 2025-09-11T18:50:59Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Cerviño, J., Bazerque, J., Calvo-Fullana, M. et al. Multi-task supervised learning via cross-learning [Preprint]. Published in: 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23-27 Aug. 2021, pp. 1381-1385. | es
dc.identifier.uri | https://hdl.handle.net/20.500.12008/51581 | -
dc.description.abstract | In this paper we consider a problem known as multi-task learning, which consists of fitting a set of classification or regression functions intended to solve different tasks. In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other. This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task. First, we present a simplified case in which the goal is to estimate the means of two Gaussian variables, in order to gain some insight into the advantage of the proposed cross-learning strategy (a toy sketch of this example appears after this metadata table). Then we provide a stochastic projected gradient algorithm to perform cross-learning over a generic loss function. If the number of parameters is large, the projection step becomes computationally expensive. To avoid this situation, we derive a primal-dual algorithm that exploits the structure of the dual problem, achieving a formulation whose complexity depends only on the number of tasks. Preliminary numerical experiments on image classification by neural networks trained on a dataset divided into different domains corroborate that the cross-learned function outperforms both the task-specific and the consensus approaches. | es
dc.description.sponsorship | NSF-Simons MoDL Theorinet | es
dc.description.sponsorship | ANII FSE 1-2019-1-157459 | es
dc.format.extent | 5 p. | es
dc.format.mimetype | application/pdf | es
dc.language.iso | en | es
dc.rights | Works deposited in the Repository are governed by the Ordinance on Intellectual Property Rights of the Universidad de la República (Res. No. 91 of the C.D.C. of 8/III/1994; D.O. 7/IV/1994) and by the Ordinance of the Open Repository of the Universidad de la República (Res. No. 16 of the C.D.C. of 07/10/2014) | es
dc.subject | Supervised learning | es
dc.subject | Multi-task learning | es
dc.subject | Optimization | es
dc.subject | Fitting | es
dc.subject | Neural networks | es
dc.subject | Signal processing algorithms | es
dc.subject | Europe | es
dc.subject | Signal processing | es
dc.subject | Gaussian distribution | es
dc.title | Multi-task supervised learning via cross-learning | es
dc.type | Preprint | es
dc.contributor.filiacion | Cerviño Juan, University of Pennsylvania, Philadelphia, USA | -
dc.contributor.filiacion | Bazerque Juan Andrés, Universidad de la República (Uruguay). Facultad de Ingeniería. | -
dc.contributor.filiacion | Calvo-Fullana Miguel, Massachusetts Institute of Technology, Cambridge, USA | -
dc.contributor.filiacion | Ribeiro Alejandro, University of Pennsylvania, Philadelphia, USA | -
dc.rights.licence | Creative Commons Attribution License (CC BY 4.0) | es
udelar.academic.department | Sistemas y Control | es
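
The abstract's two-Gaussian example lends itself to a compact illustration. The Python sketch below is a toy under stated assumptions, not the paper's algorithm: the squared loss, the values of eps and lr, and the clip-toward-the-average projection (which is the exact Euclidean projection only in this two-task scalar case) are all illustrative choices. It alternates stochastic gradient steps on each task with a projection that keeps both estimates within eps of their average, mirroring the coupling idea described in the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two tasks: estimate the means of two Gaussians with nearby true means.
    # (Toy setup; mu_true, eps and lr are illustrative assumptions.)
    mu_true = np.array([1.0, 1.2])
    data = [mu_true[i] + rng.normal(size=50) for i in range(2)]

    w = np.zeros(2)    # per-task estimates w_1, w_2
    eps = 0.15         # allowed deviation of each estimate from the average
    lr = 0.05          # step size

    for step in range(500):
        # Stochastic gradient step on the per-task squared loss 0.5*(w_i - x)^2.
        for i in range(2):
            x = rng.choice(data[i])
            w[i] -= lr * (w[i] - x)
        # Project onto {w : |w_i - mean(w)| <= eps}. For two scalar tasks,
        # symmetric clipping of the deviations preserves the average and is
        # the exact Euclidean projection onto this coupling set.
        w_bar = w.mean()
        w = w_bar + np.clip(w - w_bar, -eps, eps)

    print("cross-learned estimates:", w)
    print("per-task sample means:  ", [d.mean() for d in data])

With eps = 0 this collapses to a single consensus estimate, while a large eps leaves the fully task-specific estimates untouched; the abstract's claim is that intermediate coupling outperforms both extremes.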
Appears in collections: Academic and scientific publications - Instituto de Ingeniería Eléctrica

Files in this item:
File | Description | Size | Format
CBCR21a.pdf | Preprint | 2.43 MB | Adobe PDF


This item is licensed under a Creative Commons License.