Recognising specific foods in MRI scans using CNN and visualisation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

Electronic versions

Documents

  • Camera-ready version

    Accepted author manuscript, 1.51 MB, PDF document

Digital Object Identifier (DOI)

This work is part of an experimental project aiming at understanding the kinetics of human gastric emptying. For this purpose, magnetic resonance imaging (MRI) images of the stomach of healthy volunteers have been acquired using a state-of-the-art scanner with an adapted protocol. The challenge is to follow the stomach content (food) in the data. Frozen garden peas and petits pois were chosen as an experimental proof of concept because their shapes are well defined and are not altered in the early stages of digestion. The food recognition is performed as a binary classification implemented using a deep convolutional neural network (CNN). Input hyperparameters, here image size and number of epochs, were exhaustively evaluated to identify the combination of parameters that produces the best classification. The results have been analysed using interactive visualisation. We demonstrate in this paper that advances in computer vision and machine learning can be deployed to automatically label the content of the stomach even when the amount of training data is low and the data imbalanced. Interactive visualisation helps identify the most effective combinations of hyperparameters to maximise accuracy, precision, recall and F1 score, leaving the end-user to evaluate the possible trade-offs between these metrics. Food recognition in MRI scans with the neural network achieved an accuracy of 0.97, precision of 0.91, recall of 0.86 and F1 score of 0.89, all close to 1.
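
The exhaustive evaluation described above (image size × number of epochs, scored with accuracy, precision, recall and F1) amounts to a grid search over a small binary CNN. The Python/Keras fragment below is a minimal sketch under assumptions, not the authors' implementation: the network architecture and grid values are placeholders, the data loader load_mri_patches is hypothetical, and class weighting is used here as one common way of coping with the imbalanced data mentioned in the abstract.

import itertools
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def build_cnn(image_size):
    # Small binary CNN; the abstract does not give the architecture, so this is a placeholder.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=(image_size, image_size, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # food / not food
    ])

results = []
for image_size, epochs in itertools.product([32, 64, 128], [10, 20, 50]):  # placeholder grid
    # Hypothetical loader: MRI patches resized to image_size with binary labels
    # (1 = pea/food region, 0 = everything else).
    (x_train, y_train), (x_val, y_val) = load_mri_patches(image_size)

    model = build_cnn(image_size)
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Class weights are one common way of handling imbalanced training data.
    n_pos = int(y_train.sum())
    n_neg = len(y_train) - n_pos
    weights = {0: len(y_train) / (2 * n_neg), 1: len(y_train) / (2 * n_pos)}
    model.fit(x_train, y_train, epochs=epochs, class_weight=weights, verbose=0)

    y_pred = (model.predict(x_val, verbose=0).ravel() >= 0.5).astype(int)
    results.append({
        "image_size": image_size,
        "epochs": epochs,
        "accuracy": accuracy_score(y_val, y_pred),
        "precision": precision_score(y_val, y_pred),
        "recall": recall_score(y_val, y_pred),
        "f1": f1_score(y_val, y_pred),  # harmonic mean of precision and recall
    })

# `results` can then be passed to an interactive visualisation (e.g. a parallel
# coordinates plot) to explore the trade-offs between the four metrics.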
Original language: English
Title: Proceedings of Computer Graphics & Visual Computing 2020 (CGVC 2020)
Publisher: The Eurographics Association
Digital Object Identifiers (DOIs)
Status: Published - 2020