Recognising specific foods in MRI scans using CNN and visualisation
Research output: Chapter in Book/Report/Conference proceeding › Contribution to a Conference › peer-reviewed
Standard
Proceedings of Computer Graphics & Visual Computing 2020 (CGVC 2020). The Eurographics Association, 2020.
RIS
TY - GEN
T1 - Recognising specific foods in MRI scans using CNN and visualisation
AU - Gardner, Joshua
AU - Al-Maliki, Shatha
AU - Lutton, Evelyne
AU - Boue, Francois
AU - Vidal, Franck
PY - 2020
Y1 - 2020
N2 - This work is part of an experimental project aiming at understanding the kinetics of human gastric emptying. For this purpose, magnetic resonance imaging (MRI) images of the stomach of healthy volunteers have been acquired using a state-of-the-art scanner with an adapted protocol. The challenge is to follow the stomach content (food) in the data. Frozen garden peas and petits pois have been chosen as an experimental proof-of-concept as their shapes are well defined and are not altered in the early stages of digestion. The food recognition is performed as a binary classification implemented using a deep convolutional neural network (CNN). Input hyperparameters, here image size and number of epochs, were exhaustively evaluated to identify the combination of parameters that produces the best classification. The results have been analysed using interactive visualisation. We demonstrate in this paper that advances in computer vision and machine learning can be deployed to automatically label the content of the stomach even when the amount of training data is low and the data are imbalanced. Interactive visualisation helps identify the most effective combinations of hyperparameters to maximise accuracy, precision, recall and F1-score, letting the end-user evaluate the possible trade-off between these metrics. Food recognition in MRI scans through a neural network produced an accuracy of 0.97, a precision of 0.91, a recall of 0.86 and an F1-score of 0.89, all close to 1.
AB - This work is part of an experimental project aiming at understanding the kinetics of human gastric emptying. For this purpose, magnetic resonance imaging (MRI) images of the stomach of healthy volunteers have been acquired using a state-of-the-art scanner with an adapted protocol. The challenge is to follow the stomach content (food) in the data. Frozen garden peas and petits pois have been chosen as an experimental proof-of-concept as their shapes are well defined and are not altered in the early stages of digestion. The food recognition is performed as a binary classification implemented using a deep convolutional neural network (CNN). Input hyperparameters, here image size and number of epochs, were exhaustively evaluated to identify the combination of parameters that produces the best classification. The results have been analysed using interactive visualisation. We demonstrate in this paper that advances in computer vision and machine learning can be deployed to automatically label the content of the stomach even when the amount of training data is low and the data are imbalanced. Interactive visualisation helps identify the most effective combinations of hyperparameters to maximise accuracy, precision, recall and F1-score, letting the end-user evaluate the possible trade-off between these metrics. Food recognition in MRI scans through a neural network produced an accuracy of 0.97, a precision of 0.91, a recall of 0.86 and an F1-score of 0.89, all close to 1.
U2 - 10.2312/cgvc.20201145
DO - 10.2312/cgvc.20201145
M3 - Conference contribution
BT - Proceedings of Computer Graphics & Visual Computing 2020 (CGVC 2020)
PB - The Eurographics Association
ER -
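The abstract reports four evaluation metrics for the binary food/background classification. As a minimal sketch (not the authors' code), these can be computed from confusion-matrix counts; the counts below are hypothetical, chosen only to illustrate an imbalanced dataset similar in spirit to the one described:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: few positive (food) samples, many background samples
acc, prec, rec, f1 = classification_metrics(tp=86, fp=9, fn=14, tn=891)
```

Because F1 is the harmonic mean of precision and recall, it sits between the two and penalises whichever is lower, which is the trade-off the interactive visualisation lets the end-user explore.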