
Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12008/34000
Title: Data efficient deep learning models for text classification
Author: Garreta, Raúl
Degree awarded: Magíster en Informática
Degree-granting institution: Universidad de la República (Uruguay). Facultad de Ingeniería
Advisors: Moncecchi, Guillermo
Wonsever, Dina
Type: Master's thesis
Keywords: Text classification, Natural language processing, Sentiment analysis, Deep learning, Transfer learning
Publication date: 2020
Abstract: Text classification is one of the most important techniques within natural language processing. Applications range from topic detection and intent identification to sentiment analysis. Text classification is usually formulated as a supervised learning problem, where a labeled training set is fed to a machine learning algorithm. In practice, training supervised machine learning models such as those used in deep learning requires large training sets, which in turn involves a considerable amount of human labor to manually tag the data. This constitutes a bottleneck in applied supervised learning, so it is desirable to have supervised learning models that require smaller amounts of tagged data. In this work, we research and compare supervised learning models for text classification that are data efficient, that is, that require small amounts of tagged data to achieve state-of-the-art performance levels. In particular, we study transfer learning techniques that reuse previous knowledge to train supervised learning models. For the purpose of comparison, we focus on opinion polarity classification, a subproblem within sentiment analysis that assigns a polarity (positive or negative) to an opinion depending on the mood of the opinion holder. Multiple deep learning models for learning text representations, including BERT, InferSent, Universal Sentence Encoder, and the Sentiment Neuron, are compared on six datasets from different domains. Results show that transfer learning dramatically improves data efficiency, obtaining double-digit improvements in F1 score with fewer than 100 supervised training examples.
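The transfer-learning setup the abstract describes — a frozen pretrained encoder that maps opinions to fixed vectors, plus a small classifier trained on top with only a handful of labeled examples — can be sketched as follows. The hand-written word vectors and the nearest-centroid head here are toy stand-ins, not the models the thesis evaluates (BERT, InferSent, Universal Sentence Encoder, Sentiment Neuron); they only illustrate the pipeline.

```python
# Sketch of the pipeline from the abstract: a frozen "pretrained" encoder
# turns opinions into fixed vectors, and a tiny classifier is fit on top
# using very few labeled examples.

# Toy stand-in for pretrained word embeddings learned on a large corpus.
WORD_VECS = {
    "great": (1.0, 0.2), "loved": (0.9, 0.1), "good": (0.8, 0.3),
    "terrible": (-1.0, 0.2), "boring": (-0.8, 0.1), "awful": (-0.9, 0.3),
}

def encode(text):
    """Frozen encoder: average the pretrained vectors of the known words."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    if not vecs:
        return (0.0, 0.0)
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def centroid(points):
    """Mean vector of a list of points."""
    return tuple(sum(c) / len(points) for c in zip(*points))

# Tiny labeled set: 1 = positive opinion, 0 = negative opinion.
train = [("great movie loved it", 1), ("terrible boring film", 0),
         ("loved it great acting", 1), ("boring and terrible", 0)]

# "Training" the head: one centroid per class (a minimal linear classifier).
pos = centroid([encode(t) for t, y in train if y == 1])
neg = centroid([encode(t) for t, y in train if y == 0])

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(text):
    """Assign the class whose centroid is nearer in encoder space."""
    v = encode(text)
    return 1 if dist2(v, pos) < dist2(v, neg) else 0

# "good" and "awful" never occur in the labeled set: the pretrained vectors
# carry the polarity knowledge, which is the transfer-learning effect.
print(predict("a good film"), predict("an awful film"))  # -> 1 0
```

In the thesis's actual setting the encoder would be a large pretrained model and the head a trained classifier, but the division of labor is the same: most of the knowledge lives in the frozen representation, so the labeled set can stay small.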
Publisher: Udelar. FI.
ISSN: 1688-2792
Citation: Garreta, R. Data efficient deep learning models for text classification [online]. Master's thesis. Montevideo: Udelar. FI. INCO: PEDECIBA. Área Informática, 2020.
License: Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND 4.0)
Appears in collections: Tesis de posgrado - Instituto de Computación

Files in this item:
File: Gar20.pdf | Description: Master's thesis | Size: 5.27 MB | Format: Adobe PDF

