Tuberculosis diagnosis algorithm based on colony detection in videos of MODS cultures, using Max-Tree segmentation and MCNN neural networks
Article description
Author:
Format: master's thesis
Publication date: 2019
Institution: Consejo Nacional de Ciencia Tecnología e Innovación
Repository: CONCYTEC-Institucional
Language: Spanish
OAI identifier: oai:repositorio.concytec.gob.pe:20.500.12390/1486
Resource link: https://hdl.handle.net/20.500.12390/1486
Access level: open access
Subject: Max-Tree segmentation; neural networks; diagnostic algorithm; https://purl.org/pe-repo/ocde/ford#2.02.01
Summary:

The following work proposes an automatic algorithm for classifying videos of MODS tuberculosis samples. The video-processing stage applies edge enhancement with the Phase Stretch Transform, after which the max-tree method spatiotemporally segments and tracks the objects of interest in each day of the video.

The individual-object classification stage uses dynamic shape attributes of each object to train classic classifiers (Gaussian Naïve Bayes, Support Vector Machine, and Gaussian Process Classifier) and a multiscale convolutional neural network (MCNN). Training and evaluation of these individual-object classifiers used objects from day 3 through day 11. The study concludes that the best classical classifier is the SVM, and the best overall classifier is the MCNN. The SVM classifier achieves a precision of 59%, a sensitivity of 58%, and a specificity of 75%. The MCNN outperforms the SVM on all metrics by more than 20 percentage points, except on specificity, where the SVM is better by 4 points: the MCNN achieves a precision of 83%, a sensitivity of 83%, and a specificity of 71%.

In the video classification stage, the outputs of the object classifiers served as input to build video classifiers. The evaluation metrics for the MCNN-based video classifier considered only days 3, 4, and 5 of each available video. In this period, the MCNN obtained a precision of 81%, a sensitivity of 72%, and a specificity of 50%. Although none of these metrics reaches 90%, it is important to note that early-day colonies (days 3, 4, and 5) are very similar to detritus, residue, and other non-colony objects in MODS cultures. Moreover, samples always contain many more non-colonies than colonies; that is, they have a high level of distracting elements. As a consequence, adequate classification of early-day MODS objects is very challenging, and the results are useful as a first-level filter of MODS samples, allowing technicians to focus first on samples with a higher chance of being positive, without discarding the rest.
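The edge-enhancement step mentioned in the summary can be sketched as follows. This is a minimal illustrative implementation of the Phase Stretch Transform idea (a nonlinear, frequency-dependent phase kernel applied in the Fourier domain), not the code from the thesis; the function name and all parameter values (`lpf_sigma`, `warp`, `strength`) are assumptions chosen for the demo.

```python
import numpy as np

def phase_stretch_transform(img, lpf_sigma=0.2, warp=12.0, strength=0.5):
    """Illustrative PST sketch: low-pass the image, multiply its spectrum
    by a warped nonlinear phase kernel, and keep the phase of the result.
    Parameter values here are assumptions, not the thesis' settings."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    r = np.sqrt(u ** 2 + v ** 2)  # radial frequency grid

    # Gaussian low-pass smoothing, applied in the frequency domain
    smoothed = np.fft.ifft2(np.fft.fft2(img) * np.exp(-((r / lpf_sigma) ** 2)))

    # Warped phase profile (arctan kernel), normalized to peak `strength`
    rw = r * warp
    phase = rw * np.arctan(rw) - 0.5 * np.log1p(rw ** 2)
    phase = strength * phase / phase.max()

    out = np.fft.ifft2(np.fft.fft2(smoothed) * np.exp(-1j * phase))
    return np.angle(out)  # the output phase highlights edges

# A flat image yields (numerically) zero phase; a step edge yields a response.
flat = np.ones((32, 32))
step = np.zeros((32, 32))
step[:, 16:] = 1.0
pst_flat = phase_stretch_transform(flat)
pst_step = phase_stretch_transform(step)
```

Thresholding the output phase then gives an edge map that feeds the segmentation stage.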
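The segmentation-plus-classification pipeline can likewise be sketched on a single synthetic frame. The snippet below uses scikit-image's `area_opening` (a max-tree-based attribute filter) as a simplified stand-in for the thesis' full spatiotemporal max-tree tracking, extracts two shape attributes per surviving object, and trains an SVM on a hypothetical training set; all data, labels, and thresholds are synthetic illustrations, not thesis results.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import area_opening  # max-tree-based attribute filter
from sklearn.svm import SVC

# --- Stage 1: max-tree attribute filtering + connected components ---
frame = np.zeros((64, 64), dtype=np.uint8)
frame[10:14, 10:40] = 200   # elongated, cord-like object (colony-like)
frame[40:44, 40:44] = 180   # compact square object (debris-like)
frame[50:52, 5:7] = 150     # 4-pixel speck, removed by the area filter

filtered = area_opening(frame, area_threshold=10)
regions = regionprops(label(filtered > 0))   # two surviving objects

# --- Stage 2: shape attributes feed a classic classifier (here an SVM) ---
# Hypothetical training set: colonies are large and eccentric,
# debris is small and round. Values are synthetic, not thesis data.
rng = np.random.default_rng(0)
colonies = np.column_stack([rng.uniform(80, 200, 20), rng.uniform(0.8, 1.0, 20)])
debris = np.column_stack([rng.uniform(5, 40, 20), rng.uniform(0.0, 0.5, 20)])
X_train = np.vstack([colonies, debris])
y_train = np.array([1] * 20 + [0] * 20)      # 1 = colony, 0 = non-colony

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
X = np.array([[r.area, r.eccentricity] for r in regions])
pred = clf.predict(X)                        # per-object colony calls
```

Per-object decisions of this kind, aggregated across the days of a video, are what the video-level classifiers described in the summary are built from.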
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository hosting this document or dataset. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Open Access Science, Technology and Innovation (ALICIA).