<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Colibri Colección :</title>
  <link rel="alternate" href="https://hdl.handle.net/20.500.12008/48532" />
  <subtitle />
  <id>https://hdl.handle.net/20.500.12008/48532</id>
  <updated>2026-04-23T11:17:13Z</updated>
  <dc:date>2026-04-23T11:17:13Z</dc:date>
  <entry>
    <title>Error detection and correction for English learners using neural models</title>
    <link rel="alternate" href="https://hdl.handle.net/20.500.12008/54243" />
    <author>
      <name>Dai, Zuoheng</name>
    </author>
    <author>
      <name>Manitto, Martín</name>
    </author>
    <author>
      <name>Chiruzzo, Luis</name>
    </author>
    <author>
      <name>Rosá, Aiala</name>
    </author>
    <id>https://hdl.handle.net/20.500.12008/54243</id>
    <updated>2026-04-07T17:34:11Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Error detection and correction for English learners using neural models
Author: Dai, Zuoheng; Manitto, Martín; Chiruzzo, Luis; Rosá, Aiala
Abstract: In recent years, the importance of English language learning in Uruguay has been steadily growing. However, the country faces a significant challenge: a shortage of teachers, which makes it difficult for students to achieve an adequate level of English. This problem particularly affects children at the initial levels, limiting their progress in language learning. In this context, it is crucial to assess student progress in order to define educational policies. This project builds on the work of previous initiatives that seek to support teachers in correcting exercises written by students. The central purpose of the project is to develop a tool that facilitates the automatic correction of texts written by English learners at the initial stages. The tool must receive the texts and return a corrected version, highlighting the identified errors. Initially, we explored the exclusive use of Large Language Models (LLMs) to address this problem. However, after several experiments, the results were not satisfactory. Given this limitation, we opted for an alternative solution that combines LLMs, a module for detecting differences, and a classifier trained to predict error types. The resulting system, composed of three independent modules, achieves the same correction objective. We obtained an error corrector based on the Mistral LLM with an F0.5 score of 0.81, and an error classifier with an F1 score of 0.76. Overall, the system achieved an F0.5 score of 0.68.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Modelar : Modelado del desempeño de métodos numéricos en plataformas de hardware heterogéneas</title>
    <link rel="alternate" href="https://hdl.handle.net/20.500.12008/53879" />
    <author>
      <name>Dufrechou, Ernesto</name>
    </author>
    <author>
      <name>Favaro, Federico</name>
    </author>
    <id>https://hdl.handle.net/20.500.12008/53879</id>
    <updated>2026-03-13T17:17:02Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Modelar : Modelado del desempeño de métodos numéricos en plataformas de hardware heterogéneas
Author: Dufrechou, Ernesto; Favaro, Federico
Description: The video was produced by the Communication Office (Área de Comunicación) of the Facultad de Ingeniería.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>A synchronization-free incomplete LU factorization for GPUs with level-set analysis</title>
    <link rel="alternate" href="https://hdl.handle.net/20.500.12008/53706" />
    <author>
      <name>Freire, Manuel</name>
    </author>
    <author>
      <name>Dufrechou, Ernesto</name>
    </author>
    <author>
      <name>Ezzatti, Pablo</name>
    </author>
    <id>https://hdl.handle.net/20.500.12008/53706</id>
    <updated>2026-03-04T15:49:12Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: A synchronization-free incomplete LU factorization for GPUs with level-set analysis
Author: Freire, Manuel; Dufrechou, Ernesto; Ezzatti, Pablo
Abstract: Incomplete factorization methods are powerful algebraic preconditioners widely used to accelerate the convergence of linear solvers. The parallelization of ILU methods has been extensively studied, particularly for GPUs, which are ubiquitous parallel computing devices. In recent years, synchronization-free methods have become the mainstream approach for solving sparse triangular linear systems. Although the sparse triangular solver and ILU factorization are closely related, the application of synchronization-free strategies to ILU factorization has not been explored in the literature to the same extent as the triangular solver. In this work, we present synchronization-free implementations of the ILU-0 preconditioner on GPUs. Specifically, we propose three implementations that vary in how row updates are handled after each coefficient elimination, as well as an additional approach that leverages a prior level-set analysis to optimize the execution schedule. Unlike the LU decomposition, which computes the full factorization of A, ILU only performs an incomplete factorization by discarding certain fill-ins that would otherwise appear in L and U. This approach preserves sparsity in the factors, helping to control memory usage and computational costs. ILU is a widely used algebraic preconditioner and is often chosen when no further information about the problem is available. However, ILU factorizations can be computationally expensive, especially for large sparse matrices, partly because ILU parallelism is limited by serial dependencies in the Gaussian elimination sequence. To address this, various efforts have been made to parallelize ILU on GPUs, including approaches based on level-set analysis [3], [4], graph-coloring [5], and iterative methods [6], [7].</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Optimizing the performance of SPMV kernel in FPGA guided by the Roofline model</title>
    <link rel="alternate" href="https://hdl.handle.net/20.500.12008/53701" />
    <author>
      <name>Favaro, Federico</name>
    </author>
    <author>
      <name>Dufrechou, Ernesto</name>
    </author>
    <author>
      <name>Oliver, Juan P.</name>
    </author>
    <author>
      <name>Ezzatti, Pablo</name>
    </author>
    <id>https://hdl.handle.net/20.500.12008/53701</id>
    <updated>2026-03-04T15:44:45Z</updated>
    <published>2023-01-01T00:00:00Z</published>
    <summary type="text">Title: Optimizing the performance of SPMV kernel in FPGA guided by the Roofline model
Author: Favaro, Federico; Dufrechou, Ernesto; Oliver, Juan P.; Ezzatti, Pablo
Abstract: The widespread adoption of massively parallel processors over the past decade has fundamentally transformed the landscape of high-performance computing hardware. This revolution has recently driven the advancement of FPGAs, which are emerging as an attractive alternative to power-hungry many-core devices in a world increasingly concerned with energy consumption. Consequently, numerous recent studies have focused on implementing efficient dense and sparse numerical linear algebra (NLA) kernels on FPGAs. To maximize the efficiency of these kernels, a key aspect is the exploration of analytical tools to comprehend the performance of the developments and guide the optimization process. In this regard, the roofline model (RLM) is a well-known graphical tool that facilitates the analysis of computational performance and identifies the primary bottlenecks of a specific software when executed on a particular hardware platform. Our previous efforts advanced in developing efficient implementations of the sparse matrix–vector multiplication (SpMV) for FPGAs, considering both speed and energy consumption. In this work, we propose an extension of the RLM that enables optimizing runtime and energy consumption for NLA kernels based on sparse blocked storage formats on FPGAs. To test the power of this tool, we use it to extend our previous SpMV kernels by leveraging a block-sparse storage format that enables more efficient data access.
Description: Published in Micromachines under the title: Optimizing the performance of the Sparse Matrix–Vector Multiplication Kernel in FPGA guided by the Roofline Model.</summary>
    <dc:date>2023-01-01T00:00:00Z</dc:date>
  </entry>
</feed>

