
Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12008/38723
Full metadata record
DC field: value (language)
dc.contributor.author: Mateos, Gonzalo (es)
dc.contributor.author: Bazerque, Juan Andrés (es)
dc.contributor.author: Giannakis, Georgios B. (es)
dc.date.accessioned: 2023-08-01T20:33:30Z
dc.date.available: 2023-08-01T20:33:30Z
dc.date.issued: 2010 (es)
dc.date.submitted: 20230801 (es)
dc.identifier.citation: Mateos, G., Bazerque, J. A., Giannakis, G. B. "Distributed sparse linear regression". IEEE Transactions on Signal Processing, 2010, v. 58, no. 10. (es)
dc.identifier.uri: https://hdl.handle.net/20.500.12008/38723
dc.description.abstract: The Lasso is a popular technique for joint estimation and continuous variable selection, especially well-suited for sparse and possibly under-determined linear regression problems. This paper develops algorithms to estimate the regression coefficients via Lasso when the training data are distributed across different agents, and their communication to a central processing unit is prohibited, e.g., for communication cost or privacy reasons. A motivating application is explored in the context of wireless communications, whereby sensing cognitive radios collaborate to estimate the radio-frequency power spectrum density. Attaining different tradeoffs between complexity and convergence speed, three novel algorithms are obtained after reformulating the Lasso into a separable form, which is iteratively minimized using the alternating-direction method of multipliers so as to gain the desired degree of parallelization. Interestingly, the per-agent estimate updates are given by simple soft-thresholding operations, and inter-agent communication overhead remains at an affordable level. Without exchanging elements from the different training sets, the local estimates consent to the global Lasso solution, i.e., the fit that would be obtained if the entire data set were centrally available. Numerical experiments with both simulated and real data demonstrate the merits of the proposed distributed schemes, corroborating their convergence and global optimality. The ideas in this paper can be easily extended for the purpose of fitting related models in a distributed fashion, including the adaptive Lasso, elastic net, fused Lasso, and nonnegative garrote. (es)
dc.language: en (es)
dc.rights: Works deposited in the Repository are governed by the Intellectual Property Rights Ordinance of the Universidad de la República (Res. No. 91 of C.D.C., 8/III/1994 – D.O. 7/IV/1994) and by the Open Repository Ordinance of the Universidad de la República (Res. No. 16 of C.D.C., 07/10/2014). (es)
dc.subject: Distributed linear regression (es)
dc.subject: Lasso (es)
dc.subject: Parallel optimization (es)
dc.subject: Sparse estimation (es)
dc.subject.other: Systems and Control (es)
dc.title: Distributed sparse linear regression (es)
dc.type: Article (es)
dc.rights.licence: Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND 4.0) (es)
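The abstract describes reformulating the Lasso into a separable form solved with the alternating-direction method of multipliers (ADMM), where per-agent updates reduce to soft-thresholding and local estimates reach consensus on the global Lasso fit without exchanging training data. A minimal sketch of that idea, using the standard consensus-ADMM formulation of the Lasso rather than the paper's three specific algorithms; all names (`consensus_lasso`, `lam`, `rho`, the iteration count) are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_lasso(X_parts, y_parts, lam=0.5, rho=5.0, n_iter=500):
    """Consensus-ADMM sketch of the distributed Lasso.

    Agent j holds only its local data (X_parts[j], y_parts[j]). Agents
    exchange their local estimates w_j (never raw samples), which are
    fused into a common iterate z; the l1 penalty enters z's update as
    a soft-thresholding step, mirroring the abstract's description.
    Solves: min_w 0.5 * ||y - X w||^2 + lam * ||w||_1 over the pooled data.
    """
    J = len(X_parts)
    p = X_parts[0].shape[1]
    z = np.zeros(p)               # global (consensus) estimate
    W = np.zeros((J, p))          # local estimates w_j
    U = np.zeros((J, p))          # scaled dual variables u_j
    # Cache one Cholesky factorization of (X_j^T X_j + rho I) per agent.
    chols = [np.linalg.cholesky(X.T @ X + rho * np.eye(p)) for X in X_parts]
    for _ in range(n_iter):
        # Local least-squares-type updates (done in parallel by each agent).
        for j, (X, y) in enumerate(zip(X_parts, y_parts)):
            rhs = X.T @ y + rho * (z - U[j])
            W[j] = np.linalg.solve(chols[j].T, np.linalg.solve(chols[j], rhs))
        # Fusion: average the exchanged estimates, then soft-threshold.
        z = soft_threshold((W + U).mean(axis=0), lam / (rho * J))
        # Dual ascent enforces consensus w_j = z across agents.
        U += W - z
    return z
```

Because the consensus constraint ties every local variable to z, running this with the data split across several agents converges to the same fit as running it with all data at a single agent, which is the "local estimates consent to the global Lasso solution" property the abstract highlights.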
Appears in collections: Academic and scientific publications - Instituto de Ingeniería Eléctrica

Files in this item:
There are no files associated with this item.


This item is licensed under a Creative Commons License.