Chronic Pain Estimation Through Deep Facial Descriptors Analysis

Bibliographic Details
Authors: Mauricio A., Peña J., Dianderas E., Mauricio L., Díaz J., Morán A.
Format: article
Publication date: 2020
Institution: Consejo Nacional de Ciencia Tecnología e Innovación
Repository: CONCYTEC-Institucional
Language: English
OAI Identifier: oai:repositorio.concytec.gob.pe:20.500.12390/2652
Resource link: https://hdl.handle.net/20.500.12390/2652
https://doi.org/10.1007/978-3-030-46140-9_17
Access level: open access
Subject: Pain recognition
CNN-RNN hybrid architecture
Deep facial representations
http://purl.org/pe-repo/ocde/ford#2.02.03
Description
Summary: Worldwide, chronic pain has established itself as one of the foremost medical issues due to its 35% comorbidity with depression and many other psychological problems. Traditionally, pain assessment is performed through self-report (VAS scale) or physician inspection (OPI scale); nonetheless, the two methods do not usually coincide [14]. Regarding self-assessment, several groups of patients, such as young children or patients with limited expression abilities, cannot complete it objectively. This lack of objectivity in the metrics is the main problem in the clinical analysis of pain. In response, various efforts have been made to introduce objective metrics, among which the Prkachin and Solomon Pain Intensity (PSPI) metric, defined from facial appearance, stands out [5]. This work presents a deep learning approach to pain recognition based on deep facial representations and sequence analysis. In contrast to current state-of-the-art deep learning techniques, we correct rigid deformations introduced during registration. A preprocessing stage is applied, which includes facial frontalization to disentangle facial representations from non-affine transformations, perspective deformations, and external noise introduced during registration. After dealing with unbalanced data, we fine-tune a CNN from a pre-trained model to extract facial features, and then a multilayer RNN exploits the temporal relation between video frames. As a result, we surpass the state of the art in terms of average accuracy at the frame level (80.44%) and the sequence level (84.54%) on the UNBC-McMaster Shoulder Pain Expression Archive Database.
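The summary only outlines the method. For context, the PSPI metric referenced above is conventionally computed from facial action unit intensities as PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, giving scores from 0 (no pain) to 16. A minimal sketch of a CNN-RNN hybrid of the kind described, written in PyTorch, is given below; the ResNet-18 backbone, GRU layers, hidden size, and 17 discrete PSPI levels are illustrative assumptions, not the authors' reported configuration.

# Minimal sketch of a CNN-RNN hybrid for frame-level / sequence-level pain
# estimation, in the spirit of the approach summarized above. Backbone,
# hidden size, and class count are assumptions, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models


class CnnRnnPainEstimator(nn.Module):
    def __init__(self, num_classes=17, hidden_size=256, num_rnn_layers=2):
        super().__init__()
        # CNN feature extractor fine-tuned from an ImageNet pre-trained model
        # (assumed backbone; the record only says "a pre-trained model").
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop FC head
        feat_dim = backbone.fc.in_features
        # Multilayer RNN exploiting temporal relations between video frames.
        self.rnn = nn.GRU(feat_dim, hidden_size, num_layers=num_rnn_layers,
                          batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, seq_len, 3, H, W) of frontalized face crops.
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.rnn(feats)                # per-frame temporal features
        frame_logits = self.classifier(out)     # frame-level PSPI scores
        seq_logits = frame_logits.mean(dim=1)   # simple sequence-level pooling
        return frame_logits, seq_logits


if __name__ == "__main__":
    model = CnnRnnPainEstimator()
    dummy = torch.randn(2, 8, 3, 224, 224)      # 2 clips of 8 frames each
    frame_logits, seq_logits = model(dummy)
    print(frame_logits.shape, seq_logits.shape)  # (2, 8, 17), (2, 17)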
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository where this document or dataset is held. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the Repositorio Nacional Digital de Ciencia, Tecnología e Innovación de Acceso Abierto (ALICIA).