Reconocimiento de acciones cotidianas (Recognition of Everyday Actions)

Article Description

The proposed method consists of three parts: feature extraction, a bag-of-words representation, and classification. In the first stage, we use the STIP descriptor for the intensity channel, the HOG descriptor for the depth channel, and MFCC and spectrogram features for the audio channel. In the next stage, we apply the bag-of-words approach to each type of information separately, using the K-means algorithm to generate the dictionary. Finally, an SVM classifier labels the visual-word histograms. For the experiments, we manually segmented the videos into clips containing a single action, achieving a recognition rate of 94.4% on the Kitchen-UCSP dataset (our own dataset) and 88% on the HMA videos.
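The thesis code is not part of this record, but the pipeline the abstract describes (local descriptors per channel, a K-means visual dictionary, bag-of-words histograms, an SVM classifier) can be sketched. The Python snippet below is a minimal illustration assuming local descriptors (e.g., STIP or HOG vectors) have already been extracted for each clip; the vocabulary size, kernel, and regularization constant are placeholder choices, not the settings used in the thesis.

    # Minimal bag-of-words + SVM sketch of the pipeline in the abstract.
    # Assumes per-clip local descriptors are already extracted; k, kernel,
    # and C below are illustrative placeholders, not the thesis's values.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_vocabulary(all_descriptors, k=500):
        """Cluster the pooled local descriptors into a k-word dictionary."""
        kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
        kmeans.fit(np.vstack(all_descriptors))
        return kmeans

    def bow_histogram(descriptors, kmeans):
        """Quantize one clip's descriptors against the dictionary and
        return an L1-normalized visual-word histogram."""
        words = kmeans.predict(descriptors)
        hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_and_classify(train_descs, train_labels, test_descs, k=500):
        """train_descs / test_descs: lists of (n_i, d) descriptor arrays,
        one array per clip; train_labels: one action label per clip."""
        kmeans = build_vocabulary(train_descs, k)
        X_train = np.array([bow_histogram(d, kmeans) for d in train_descs])
        X_test = np.array([bow_histogram(d, kmeans) for d in test_descs])
        clf = SVC(kernel="rbf", C=10.0)
        clf.fit(X_train, train_labels)
        return clf.predict(X_test)

Per the abstract, this quantization step is applied to each channel (intensity, depth, audio) separately, so one such dictionary and histogram would be built per type of information before classification.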

Bibliographic Details
Author: Vizconde La Motta, Kelly
Format: master's thesis
Publication date: 2016
Institution: Consejo Nacional de Ciencia, Tecnología e Innovación
Repository: CONCYTEC-Institucional
Language: Spanish
OAI identifier: oai:repositorio.concytec.gob.pe:20.500.12390/2060
Resource link: https://hdl.handle.net/20.500.12390/2060
Access level: open access
Subjects: SVM
STIP
HOG
Spectrogram
https://purl.org/pe-repo/ocde/ford#1.02.01