A Comparative Analysis on the Summarization of Legal Texts Using Transformer Models


Bibliographic Details
Authors: Núñez-Robinson, Daniel; Talavera-Montalto, Jose; Ugarte, Willy
Format: article
Publication Date: 2022
Institution: Universidad Peruana de Ciencias Aplicadas
Repository: UPC-Institucional
Language: English
OAI Identifier: oai:repositorioacademico.upc.edu.pe:10757/669595
Resource Link: http://hdl.handle.net/10757/669595
Access Level: embargoed
Subjects: Abstractive text summarization
Benchmark
Deep learning
Natural language processing
Transformers
Description
Summary: Transformer models have advanced natural language processing in machine learning and set a new state of the art. Thanks to the self-attention mechanism, these models have achieved significant improvements in text generation tasks such as extractive and abstractive text summarization. However, research combining text summarization and the legal domain is still in its infancy, so benchmarks and a comparative analysis of these state-of-the-art models are important for the future of this highly specialized task. To contribute to this line of research, the researchers propose a comparative analysis of different fine-tuned Transformer models and datasets in order to provide a better understanding of the task at hand and the challenges ahead. The results show that Transformer models have improved upon the text summarization task; however, consistent and generalized learning remains a challenge when training the models on long texts. Finally, after analyzing the correlation between objective results and human opinion, the team concludes that the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [13] metrics used in the current state of the art are limited and do not reflect the precise quality of a generated summary.
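The ROUGE limitation the abstract points to can be illustrated concretely. ROUGE-N scores a candidate summary purely by n-gram overlap with a reference, so two summaries with identical wording score perfectly while a faithful paraphrase scores poorly. The following is a minimal sketch of ROUGE-1 in pure Python (the function name and the example sentences are illustrative, not taken from the paper):

```python
from collections import Counter

def rouge_n(reference: str, candidate: str, n: int = 1):
    """Compute ROUGE-N recall, precision and F1 from clipped n-gram overlap."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    overlap = sum((ref & cand).values())          # clipped matching n-grams
    recall = overlap / max(sum(ref.values()), 1)  # fraction of reference covered
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * recall * precision / (recall + precision) if overlap else 0.0
    return recall, precision, f1

# An exact repetition of the reference scores a perfect 1.0 ...
print(rouge_n("the court dismissed the appeal",
              "the court dismissed the appeal"))   # (1.0, 1.0, 1.0)

# ... while a paraphrase with the same meaning scores far lower,
# which is the kind of mismatch with human judgment the paper reports.
print(rouge_n("the court dismissed the appeal",
              "the tribunal rejected the challenge"))
```

In the second call only the two occurrences of "the" overlap, giving a recall of 0.4 despite the summaries being semantically equivalent, which is why surface-overlap metrics can diverge from human opinion.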
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository in which this document or dataset is held. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Science, Technology and Innovation in Open Access (ALICIA).