
Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12008/26962
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zinemanas, Pablo | -
dc.contributor.author | Rocamora, Martín | -
dc.contributor.author | Miron, Marius | -
dc.contributor.author | Font, Frederic | -
dc.contributor.author | Serra, Xavier | -
dc.date.accessioned | 2021-04-06T16:36:10Z | -
dc.date.available | 2021-04-06T16:36:10Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Zinemanas, P., Rocamora, M., Miron, M. et al. "An interpretable deep learning model for automatic sound classification". Electronics [online]. 2021, vol. 10, no. 7, pp. 1-23. DOI: 10.3390/electronics10070850 | es
dc.identifier.uri | https://hdl.handle.net/20.500.12008/26962 | -
dc.description.abstract | Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as susceptibility to adversarial attacks or the reinforcement of biases. Despite the increasing interest in developing deep learning models that provide explanations of their decisions, there is still a lack of research in the audio domain. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space (see the sketch after this metadata record). We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results comparable to those of state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods to prune the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for the manual editing of the model, which allows for a human-in-the-loop debugging approach. | en
dc.format.extent | 23 p. | es
dc.format.mimetype | application/pdf | es
dc.language.iso | en | es
dc.publisher | MDPI | es
dc.relation.ispartof | Electronics, vol. 10, no. 7, pp. 1-23, Apr. 2021 | es
dc.rights | Works deposited in the Repository are governed by the Ordinance on Intellectual Property Rights of the Universidad de la República (Res. No. 91 of the C.D.C. of 8/III/1994 - D.O. 7/IV/1994) and by the Ordinance on the Open Repository of the Universidad de la República (Res. No. 16 of the C.D.C. of 07/10/2014) | es
dc.subject | Interpretability | en
dc.subject | Explainability | en
dc.subject | Deep learning | en
dc.subject | Sound classification | en
dc.subject | Prototypes | en
dc.title | An interpretable deep learning model for automatic sound classification. | en
dc.type | Article | es
dc.contributor.filiacion | Zinemanas Pablo, Universitat Pompeu Fabra | -
dc.contributor.filiacion | Rocamora Martín, Universidad de la República (Uruguay). Facultad de Ingeniería. | -
dc.contributor.filiacion | Miron Marius, Universitat Pompeu Fabra | -
dc.contributor.filiacion | Font Frederic, Universitat Pompeu Fabra | -
dc.contributor.filiacion | Serra Xavier, Universitat Pompeu Fabra | -
dc.rights.licence | Creative Commons Attribution License (CC BY 4.0) | es
dc.identifier.doi | 10.3390/electronics10070850 | -
dc.identifier.eissn | 2079-9292 | -
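
The abstract above describes classification by comparing the input, in a latent space, to a set of learned prototypes, using a frequency-dependent similarity measure. The following is a minimal sketch of that idea only, not the paper's actual architecture: the function name, the array shapes, the weighted squared-distance form, and the exponential similarity are all assumptions made for illustration.

```python
import numpy as np

def prototype_predict(embedding, prototypes, freq_weights, labels):
    """Hypothetical sketch of prototype-based classification.

    embedding:    (F, T) latent time-frequency representation of the input
    prototypes:   (P, F, T) learned prototypes in the same latent space
    freq_weights: (P, F) nonnegative per-prototype frequency weighting
    labels:       length-P list mapping each prototype to a class name
    """
    # Frequency-dependent squared distance: per-band differences are
    # weighted by each prototype's frequency profile before summing.
    diff = prototypes - embedding[None, :, :]            # (P, F, T)
    dist = np.einsum("pf,pft->p", freq_weights, diff ** 2)
    sims = np.exp(-dist)                                 # similarity in (0, 1]
    best = int(np.argmax(sims))                          # nearest prototype
    return labels[best], best, sims

# Toy usage: an input close to prototype 3 should be assigned its class.
rng = np.random.default_rng(0)
P, F, T = 10, 64, 128
prototypes = rng.normal(size=(P, F, T))
weights = np.abs(rng.normal(size=(P, F)))
x = prototypes[3] + 0.05 * rng.normal(size=(F, T))
label, idx, sims = prototype_predict(x, prototypes, weights,
                                     [f"class_{p}" for p in range(P)])
print(label, idx)  # expected: class_3 3
```

The per-frequency weights let each prototype emphasize the bands where it is most discriminative, which is the intuition behind the frequency-dependent measure mentioned in the abstract; because the prediction is the nearest prototype, the explanation of a decision is simply that prototype itself.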
Appears in collections: Academic and scientific publications - Instituto de Ingeniería Eléctrica

Files in this item:
File | Description | Size | Format
ZRMFS21.pdf | Published version | 2.72 MB | Adobe PDF


This item is licensed under a Creative Commons license.