Selective keyframe summarisation for egocentric videos based on semantic concept search
Research output: Contribution to conference › Paper › peer-review
2018. Paper presented at The International Image Processing Applications and Systems Conference, Sophia Antipolis, France.
RIS
TY - CONF
T1 - Selective keyframe summarisation for egocentric videos based on semantic concept search
AU - Yousefi, Paria
AU - Kuncheva, Ludmila
PY - 2018
Y1 - 2018
N2 - Large volumes of egocentric video data are being continually collected every day. While the standard video summarisation approach offers all-purpose summaries, here we propose a method for selective video summarisation. The user can query the video with an unlimited vocabulary of terms. The result is a time-tagged summary of keyframes related to the query concept. Our method uses a pre-trained Convolutional Neural Network (CNN) for the semantic search, and visualises the generated summary as a compass. Two commonly used datasets were chosen for the evaluation: UTEgo egocentric video and EDUB lifelog.
M3 - Paper
T2 - The International Image Processing Applications and Systems Conference
Y2 - 12 December 2018 through 14 December 2018
ER -
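The abstract describes querying an egocentric video with a free-vocabulary concept and returning time-tagged keyframes ranked by semantic relevance. A minimal sketch of that idea, under assumed inputs (the paper's own pipeline uses a pre-trained CNN to produce frame features; here `frames` stands in for such per-frame feature vectors, and `query_vec` for an embedding of the query term — both are hypothetical placeholders, not the authors' actual representation):

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_keyframes(query_vec, frames, threshold=0.5):
    """Return (timestamp, similarity) pairs for frames matching the query.

    frames: list of (timestamp_seconds, feature_vector) pairs, where the
    feature vector would in practice come from a pre-trained CNN layer.
    """
    scored = [(t, cosine(query_vec, f)) for t, f in frames]
    return [(t, s) for t, s in scored if s >= threshold]

# Illustrative toy features: the first frame matches the query direction.
query = [1.0, 0.0]
video = [(0, [1.0, 0.0]), (30, [0.0, 1.0]), (60, [0.7, 0.7])]
summary = select_keyframes(query, video, threshold=0.5)
# Keyframes at t=0 and t=60 exceed the similarity threshold.
```

The time-tagged pairs returned here are what a compass-style visualisation, as mentioned in the abstract, would plot around the video's timeline.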