Pedestrian identification through Deep Learning with classical and modern architecture of Convolutional Neural Networks
Article Description
This article reports on research carried out at the Universidad Nacional Micaela Bastidas de Apurímac (UNAMBA). Its specific objectives were: to determine, in a first learning stage, the accuracy of a classical Convolutional Neural Network (CNN) architecture in identifying people at UNAMBA; to determine, in a second stage, the accuracy of a modern CNN architecture; and finally to compare the two stages to find the higher accuracy. Training covered 242 people, for which 27,996 images were generated through video scraping and data augmentation, divided into 19,700 images for training and 8,296 for validation. In the first stage a modified VGG16-UNAMBA model is proposed, which achieved an accuracy of 0.9721; in the second stage DenseNet121-UNAMBA is proposed, which achieved an accuracy of 0.9943. The conclusion is that deep learning makes it possible to identify UNAMBA staff with high accuracy.
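The abstract describes a two-stage transfer-learning comparison between a classical and a modern CNN backbone. The sketch below is a minimal illustration of that setup in Keras, not the authors' implementation: the input size, the classification head, and the training settings are assumptions, since the exact VGG16-UNAMBA and DenseNet121-UNAMBA modifications are not detailed in this record.

```python
# Hypothetical sketch: 242-class person classifiers on two ImageNet
# backbones, mirroring the abstract's stage 1 (VGG16) vs stage 2
# (DenseNet121) comparison. Head layers and hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, DenseNet121

NUM_CLASSES = 242  # people enrolled in the study, per the abstract

def build_classifier(backbone_name: str) -> tf.keras.Model:
    """Attach a small softmax head to a frozen ImageNet backbone."""
    if backbone_name == "vgg16":
        backbone = VGG16(include_top=False, weights="imagenet",
                         input_shape=(224, 224, 3))
    else:
        backbone = DenseNet121(include_top=False, weights="imagenet",
                               input_shape=(224, 224, 3))
    backbone.trainable = False  # feature extraction only in this sketch

    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(512, activation="relu")(x)  # assumed head width
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

vgg_model = build_classifier("vgg16")             # stage 1: classical CNN
densenet_model = build_classifier("densenet121")  # stage 2: modern CNN
```

Comparing the two models' validation accuracy after identical training, as the study does across its two stages, is then a matter of calling `fit` on each with the same data generators.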
| Authors: | Ordoñes Ramos, Erech; Mamani Vilca, Ecler; Mamani Coaquira, Yonatan |
|---|---|
| Format: | article |
| Publication date: | 2020 |
| Institution: | Universidad Nacional Micaela Bastidas de Apurímac |
| Repository: | UNAMBA-Institucional |
| Language: | Spanish |
| OAI identifier: | oai:172.16.0.151:UNAMBA/940 |
| Resource link: | http://repositorio.unamba.edu.pe/handle/UNAMBA/940 |
| Access level: | open access |
| Subjects: | Recognition of people; Convolutional neural network; Deep learning; https://purl.org/pe-repo/ocde/ford#2.02.03 |
| Field | Value |
|---|---|
| id | UNMB_8c0deb212fdd65c8551b64101c89728e |
| oai_identifier_str | oai:172.16.0.151:UNAMBA/940 |
| network_acronym_str | UNMB |
| network_name_str | UNAMBA-Institucional |
| dc.title.es_PE.fl_str_mv | Pedestrian identification through Deep Learning with classical and modern architecture of Convolutional Neural Networks |
| dc.contributor.author.fl_str_mv | Ordoñes Ramos, Erech; Mamani Vilca, Ecler; Mamani Coaquira, Yonatan |
| dc.subject.es_PE.fl_str_mv | Recognition of people; Convolutional neural network |
| dc.subject.none.fl_str_mv | Deep learning |
| dc.subject.ocde.es_PE.fl_str_mv | https://purl.org/pe-repo/ocde/ford#2.02.03 |
| description | This article reports on research carried out at the Universidad Nacional Micaela Bastidas de Apurímac (UNAMBA). Its specific objectives were: to determine, in a first learning stage, the accuracy of a classical Convolutional Neural Network (CNN) architecture in identifying people at UNAMBA; to determine, in a second stage, the accuracy of a modern CNN architecture; and finally to compare the two stages to find the higher accuracy. Training covered 242 people, for which 27,996 images were generated through video scraping and data augmentation, divided into 19,700 images for training and 8,296 for validation. In the first stage a modified VGG16-UNAMBA model is proposed, which achieved an accuracy of 0.9721; in the second stage DenseNet121-UNAMBA is proposed, which achieved an accuracy of 0.9943. The conclusion is that deep learning makes it possible to identify UNAMBA staff with high accuracy. |
| publishDate | 2020 |
| dc.date.accessioned.none.fl_str_mv | 2021-05-14T21:19:40Z |
| dc.date.available.none.fl_str_mv | 2021-05-14T21:19:40Z |
| dc.date.issued.fl_str_mv | 2020-12-12 |
| dc.type.es_PE.fl_str_mv | info:eu-repo/semantics/article |
| dc.identifier.citation.es_PE.fl_str_mv | IEEE |
| dc.identifier.issn.none.fl_str_mv | 2706-543X |
| dc.identifier.uri.none.fl_str_mv | http://repositorio.unamba.edu.pe/handle/UNAMBA/940 |
| dc.identifier.journal.es_PE.fl_str_mv | C&T Riqchary |
| dc.language.iso.es_PE.fl_str_mv | spa |
| dc.rights.es_PE.fl_str_mv | info:eu-repo/semantics/openAccess |
| dc.rights.uri.*.fl_str_mv | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ |
| dc.format.es_PE.fl_str_mv | application/pdf |
| dc.publisher.es_PE.fl_str_mv | Universidad Nacional Micaela Bastidas de Apurímac |
| dc.source.es_PE.fl_str_mv | Universidad Nacional Micaela Bastidas de Apurímac, Repositorio Institucional - UNAMBA |
| dc.source.none.fl_str_mv | reponame:UNAMBA-Institucional; instname:Universidad Nacional Micaela Bastidas de Apurímac; instacron:UNAMBA |
| repository.name.fl_str_mv | DSpace |
| repository.mail.fl_str_mv | athos2777@gmail.com |
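The description field above mentions generating the 27,996-image dataset through video scraping and data augmentation, with a 19,700/8,296 train/validation split. Below is a minimal, hypothetical sketch of such a pipeline using OpenCV and Keras; the paths, frame stride, and augmentation parameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the data preparation: frame extraction
# ("video scraping") followed by augmented data generators.
import cv2  # OpenCV, assumed here for frame extraction
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def scrape_frames(video_path: str, out_dir: str, stride: int = 10) -> int:
    """Save every `stride`-th frame of a video as a JPEG; return the count."""
    capture = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream
            break
        if index % stride == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.jpg", frame)
            saved += 1
        index += 1
    capture.release()
    return saved

# Augmented generators over directories with one subfolder per person.
# Directory names and augmentation settings are assumptions.
augmenter = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15,
                               horizontal_flip=True, zoom_range=0.1)
train_gen = augmenter.flow_from_directory("dataset/train",  # 19,700 images
                                          target_size=(224, 224),
                                          class_mode="categorical")
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "dataset/val",                                          # 8,296 images
    target_size=(224, 224), class_mode="categorical")
```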
Important note:
The information in this record is the sole responsibility of the institution that manages the institutional repository hosting this document or dataset. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Open Access Science, Technology and Innovation (ALICIA).