Text prediction recurrent neural networks using long short-term memory-dropout
Article description
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) whose sequence-based models are used in text generation and/or prediction tasks, question answering, and classification systems due to their ability to learn long-term dependencies. The present research integrates the LS...
Authors: | Iparraguirre-Villanueva, Orlando; Guevara-Ponce, Victor; Ruiz-Alvarado, Daniel; Beltozar-Clemente, Saul; Sierra-Liñan, Fernando; Zapata-Paulini, Joselyn; Cabanillas-Carbonell, Michael |
---|---|
Format: | article |
Publication date: | 2023 |
Institution: | Universidad Autónoma del Perú |
Repository: | AUTONOMA-Institucional |
Language: | English |
OAI identifier: | oai:repositorio.autonoma.edu.pe:20.500.13067/2830 |
Resource links: | https://hdl.handle.net/20.500.13067/2830 https://doi.org/10.11591/ijeecs.v29.i3.pp1758-1768 |
Access level: | open access |
Subjects: | Dropout; Prediction; Recurrent neural network; Text; Long short-term memory; https://purl.org/pe-repo/ocde/ford#2.02.04 |
id |
AUTO_399b4b8036dab0696223dff81b2d5fb8 |
oai_identifier_str |
oai:repositorio.autonoma.edu.pe:20.500.13067/2830 |
network_acronym_str |
AUTO |
network_name_str |
AUTONOMA-Institucional |
repository_id_str |
4774 |
dc.title.es_PE.fl_str_mv |
Text prediction recurrent neural networks using long short-term memory-dropout |
title |
Text prediction recurrent neural networks using long short-term memory-dropout |
spellingShingle |
Text prediction recurrent neural networks using long short-term memory-dropout; Iparraguirre-Villanueva, Orlando; Dropout; Prediction; Recurrent neural network; Text; Long short-term memory; https://purl.org/pe-repo/ocde/ford#2.02.04 |
title_short |
Text prediction recurrent neural networks using long short-term memory-dropout |
title_full |
Text prediction recurrent neural networks using long short-term memory-dropout |
title_fullStr |
Text prediction recurrent neural networks using long short-term memory-dropout |
title_full_unstemmed |
Text prediction recurrent neural networks using long short-term memory-dropout |
title_sort |
Text prediction recurrent neural networks using long short-term memory-dropout |
author |
Iparraguirre-Villanueva, Orlando |
author_facet |
Iparraguirre-Villanueva, Orlando; Guevara-Ponce, Victor; Ruiz-Alvarado, Daniel; Beltozar-Clemente, Saul; Sierra-Liñan, Fernando; Zapata-Paulini, Joselyn; Cabanillas-Carbonell, Michael |
author_role |
author |
author2 |
Guevara-Ponce, Victor; Ruiz-Alvarado, Daniel; Beltozar-Clemente, Saul; Sierra-Liñan, Fernando; Zapata-Paulini, Joselyn; Cabanillas-Carbonell, Michael |
author2_role |
author author author author author author |
dc.contributor.author.fl_str_mv |
Iparraguirre-Villanueva, Orlando; Guevara-Ponce, Victor; Ruiz-Alvarado, Daniel; Beltozar-Clemente, Saul; Sierra-Liñan, Fernando; Zapata-Paulini, Joselyn; Cabanillas-Carbonell, Michael |
dc.subject.es_PE.fl_str_mv |
Dropout; Prediction; Recurrent neural network; Text; Long short-term memory |
topic |
Dropout; Prediction; Recurrent neural network; Text; Long short-term memory; https://purl.org/pe-repo/ocde/ford#2.02.04 |
dc.subject.ocde.es_PE.fl_str_mv |
https://purl.org/pe-repo/ocde/ford#2.02.04 |
description |
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) whose sequence-based models are used in text generation and/or prediction tasks, question answering, and classification systems due to their ability to learn long-term dependencies. The present research integrates the LSTM network and the dropout technique to generate text from an input corpus; a model is developed to find the best way to extract words from the context. For training, the novel "La Ciudad y los perros", which comprises 128,600 words, is used as input data. The text was divided into two data sets: 38.88% for training the model and the remaining 61.12% for testing it. The proposed model was tested in two variants, word importance and context, and the results were evaluated in terms of the semantic proximity of the generated text to the given context. |
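The description above gives the approach only at a high level. The sketch below is a minimal, illustrative next-word predictor that combines an LSTM with dropout, assuming a Keras/TensorFlow stack; the embedding size (64), the number of LSTM units (128), the dropout rates (0.2), and the two-line stand-in corpus are assumptions, not values taken from the paper.

```python
# Minimal LSTM + dropout next-word prediction sketch (assumed Keras/TensorFlow
# stack; hyper-parameters and the toy corpus are illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in corpus. The study trains on the full text of "La Ciudad y los
# perros" (128,600 words) with a 38.88%/61.12% train/test split, i.e. roughly
# 50,000 training words and 78,600 test words.
corpus = [
    "los cadetes caminaban por el patio del colegio",
    "el jaguar miraba desde la sombra del patio",
]

# Build a word index by hand; id 0 is reserved for left padding.
words = sorted({w for line in corpus for w in line.split()})
word_to_id = {w: i + 1 for i, w in enumerate(words)}
id_to_word = {i: w for w, i in word_to_id.items()}
vocab_size = len(word_to_id) + 1

# Each training pair is (prefix of a sentence -> id of the next word).
pairs = []
for line in corpus:
    ids = [word_to_id[w] for w in line.split()]
    for i in range(1, len(ids)):
        pairs.append((ids[:i], ids[i]))

max_len = max(len(prefix) for prefix, _ in pairs)
X = np.zeros((len(pairs), max_len), dtype="int32")
y = np.zeros(len(pairs), dtype="int32")
for row, (prefix, target) in enumerate(pairs):
    X[row, max_len - len(prefix):] = prefix  # left-pad with zeros
    y[row] = target

# LSTM combined with dropout: a Dropout layer on the embeddings plus
# recurrent_dropout inside the LSTM cell.
model = keras.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.Dropout(0.2),
    layers.LSTM(128, recurrent_dropout=0.2),
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=100, verbose=0)

def predict_next(seed: str, n_words: int = 2) -> str:
    """Greedily append the n_words most probable next words to the seed."""
    text = seed.split()
    for _ in range(n_words):
        ids = [word_to_id[w] for w in text if w in word_to_id][-max_len:]
        inp = np.zeros((1, max_len), dtype="int32")
        inp[0, max_len - len(ids):] = ids
        probs = model.predict(inp, verbose=0)[0]
        probs[0] = 0.0  # never emit the padding id
        text.append(id_to_word[int(np.argmax(probs))])
    return " ".join(text)

print(predict_next("los cadetes"))
```

Evaluating the generated text by its semantic proximity to the given context, as the description states, would require an additional similarity measure (for example, cosine similarity over word or sentence embeddings), which is not shown in this sketch.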
publishDate |
2023 |
dc.date.accessioned.none.fl_str_mv |
2023-11-30T16:15:15Z |
dc.date.available.none.fl_str_mv |
2023-11-30T16:15:15Z |
dc.date.issued.fl_str_mv |
2023 |
dc.type.es_PE.fl_str_mv |
info:eu-repo/semantics/article |
format |
article |
dc.identifier.uri.none.fl_str_mv |
https://hdl.handle.net/20.500.13067/2830 |
dc.identifier.doi.none.fl_str_mv |
https://doi.org/10.11591/ijeecs.v29.i3.pp1758-1768 |
url |
https://hdl.handle.net/20.500.13067/2830 https://doi.org/10.11591/ijeecs.v29.i3.pp1758-1768 |
dc.language.iso.es_PE.fl_str_mv |
eng |
language |
eng |
dc.rights.es_PE.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.rights.uri.es_PE.fl_str_mv |
https://creativecommons.org/licenses/by/4.0/ |
eu_rights_str_mv |
openAccess |
rights_invalid_str_mv |
https://creativecommons.org/licenses/by/4.0/ |
dc.format.es_PE.fl_str_mv |
application/pdf |
dc.publisher.es_PE.fl_str_mv |
Indonesian Journal of Electrical Engineering and Computer Science |
dc.source.none.fl_str_mv |
reponame:AUTONOMA-Institucional instname:Universidad Autónoma del Perú instacron:AUTONOMA |
instname_str |
Universidad Autónoma del Perú |
instacron_str |
AUTONOMA |
institution |
AUTONOMA |
reponame_str |
AUTONOMA-Institucional |
collection |
AUTONOMA-Institucional |
dc.source.volume.es_PE.fl_str_mv |
29 |
dc.source.issue.es_PE.fl_str_mv |
3 |
dc.source.beginpage.es_PE.fl_str_mv |
1758 |
dc.source.endpage.es_PE.fl_str_mv |
1768 |
bitstream.url.fl_str_mv |
http://repositorio.autonoma.edu.pe/bitstream/20.500.13067/2830/1/8_2023.pdf http://repositorio.autonoma.edu.pe/bitstream/20.500.13067/2830/2/license.txt http://repositorio.autonoma.edu.pe/bitstream/20.500.13067/2830/3/8_2023.pdf.txt http://repositorio.autonoma.edu.pe/bitstream/20.500.13067/2830/4/8_2023.pdf.jpg |
bitstream.checksum.fl_str_mv |
9a424f6eaf61160e059f05aa327d601c 9243398ff393db1861c890baeaeee5f9 9b711ee79f71bcf2168da4c64ebd15de 3b9d8e847ace2613b5b71366785ab061 |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositorio de la Universidad Autonoma del Perú |
repository.mail.fl_str_mv |
repositorio@autonoma.pe |
_version_ |
1835915338073505792 |
score |
13.958958 |
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository holding this document or data set. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Open Access Science, Technology and Innovation (ALICIA).