
Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12008/43547
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lezama, José | es
dc.contributor.author | Qiu, Qiang | es
dc.contributor.author | Musé, Pablo | es
dc.contributor.author | Sapiro, Guillermo | es
dc.date.accessioned | 2024-04-16T16:21:21Z | -
dc.date.available | 2024-04-16T16:21:21Z | -
dc.date.issued | 2018 | es
dc.date.submitted | 20240416 | es
dc.identifier.citation | Lezama, J., Qiu, Q., Musé, P., Sapiro, G. "OLE: Orthogonal low-rank embedding, a plug and play geometric loss for deep learning," Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018, pp. 8109-8118. doi: 10.1109/CVPR.2018.00846 | es
dc.identifier.uri | https://hdl.handle.net/20.500.12008/43547 | -
dc.description.abstract | Deep neural networks trained using a softmax layer at the top and the cross-entropy loss are ubiquitous tools for image classification. Yet, this does not naturally enforce intra-class similarity nor inter-class margin of the learned deep representations. To simultaneously achieve these two goals, different solutions have been proposed in the literature, such as the pairwise or triplet losses. However, these carry the extra task of selecting pairs or triplets, and the extra computational burden of computing and learning for many combinations of them. In this paper, we propose a plug-and-play loss term for deep networks that explicitly reduces intra-class variance and enforces inter-class margin simultaneously, in a simple and elegant geometric manner. For each class, the deep features are collapsed into a learned linear subspace, or union of them, and inter-class subspaces are pushed to be as orthogonal as possible. Our proposed Orthogonal Low-rank Embedding (OLÉ) does not require carefully crafting pairs or triplets of samples for training, and works standalone as a classification loss, being the first reported deep metric learning framework of its kind. Because of the improved margin between features of different classes, the resulting deep networks generalize better, are more discriminative, and more robust. We demonstrate improved classification performance in general object recognition, plugging the proposed loss term into existing off-the-shelf architectures. In particular, we show the advantage of the proposed loss in the small data/model scenario, and we significantly advance the state-of-the-art on the Stanford STL-10 benchmark. (See the loss sketch after this metadata table.) | es
dc.language | en | es
dc.publisher | CVF | es
dc.relation.ispartof | Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018 | es
dc.rights | Works deposited in the Repository are governed by the Intellectual Property Rights Ordinance of the Universidad de la República (Res. Nº 91 of the C.D.C., 8/III/1994 – D.O. 7/IV/1994) and by the Open Repository Ordinance of the Universidad de la República (Res. Nº 16 of the C.D.C., 07/10/2014) | es
dc.subject | Neural networks | es
dc.subject.other | Signal Processing (Procesamiento de Señales) | es
dc.title | OLE: orthogonal low-rank embedding, a plug and play geometric loss for deep learning | es
dc.type | Conference paper | es
dc.rights.licence | Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0) | es
udelar.academic.department | Procesamiento de Señales | -
udelar.investigation.group | Tratamiento de Imágenes | -
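The abstract describes OLÉ as a plug-and-play geometric loss: the deep features of each class are collapsed onto a low-rank subspace while the subspaces of different classes are pushed toward orthogonality. Below is a minimal PyTorch sketch of a loss in that spirit, assuming a nuclear-norm formulation (per-class nuclear norms are minimized, floored at a margin delta so features are not driven to zero, minus the nuclear norm of the full batch, which rewards spread between classes). The names ole_loss, delta, and lam are illustrative; this is a sketch, not the authors' reference implementation.

import torch

def ole_loss(features, labels, delta=1.0):
    # Hypothetical OLE-style loss: low rank within each class,
    # large spread (nuclear norm) across the whole batch.
    # features: (N, d) deep features; labels: (N,) integer class labels.
    intra = features.new_zeros(())
    for c in labels.unique():
        class_feats = features[labels == c]            # (n_c, d) block for class c
        nuc = torch.linalg.svdvals(class_feats).sum()  # nuclear norm of the class block
        intra = intra + torch.clamp(nuc, min=delta)    # floor at delta to avoid collapse
    inter = torch.linalg.svdvals(features).sum()       # nuclear norm of the full batch
    return (intra - inter) / features.shape[0]

As a plug-and-play term it would be added to the usual objective, e.g. loss = F.cross_entropy(logits, y) + lam * ole_loss(feats, y), where feats are the penultimate-layer features and lam balances the two terms.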
Appears in collections: Academic and Scientific Publications - Instituto de Ingeniería Eléctrica

Files in this item:
File | Description | Size | Format
LQMS18.pdf |  | 1.82 MB | Adobe PDF


This item is licensed under a Creative Commons License.