Selective keyframe summarisation for egocentric videos based on semantic concept search

Paria Yousefi, Ludmila Kuncheva

Research output: Contribution to conference › Paper › peer-review


Abstract

Large volumes of egocentric video data are collected every day. While standard video summarisation offers all-purpose summaries, here we propose a method for selective video summarisation, in which the user can query the video with an unlimited vocabulary of terms. The result is a time-tagged summary of keyframes related to the query concept. Our method uses a pre-trained Convolutional Neural Network (CNN) for the semantic search, and visualises the generated summary as a compass. Two commonly used datasets were chosen for the evaluation: the UTEgo egocentric video dataset and the EDUB lifelog dataset.
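To make the pipeline described in the abstract concrete, the sketch below shows one plausible way to score keyframes against a text query with a pre-trained CNN. The abstract does not specify the network, the concept-matching scheme, or any thresholds, so everything here is an assumption: torchvision's ResNet-50 stands in for the unspecified CNN, the open query vocabulary is approximated by matching the query string against the ImageNet-1k class names, and the compass visualisation is reduced to a time-tagged list of (timestamp, score) pairs. This is a minimal illustration, not the authors' implementation.

```python
# Hypothetical sketch of selective keyframe summarisation via semantic
# concept search. Assumptions: ResNet-50 as the pre-trained CNN, ImageNet
# class names as the concept vocabulary, substring matching for the query,
# and an arbitrary score threshold of 0.3.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()              # standard ImageNet preprocessing
categories = weights.meta["categories"]        # 1000 ImageNet class names


def concept_score(image: Image.Image, query: str) -> float:
    """Probability mass the CNN assigns to classes whose name mentions the query."""
    with torch.no_grad():
        probs = model(preprocess(image).unsqueeze(0)).softmax(dim=1).squeeze(0)
    hits = [i for i, name in enumerate(categories) if query.lower() in name.lower()]
    return float(probs[hits].sum()) if hits else 0.0


def selective_summary(keyframes, query, threshold=0.3):
    """keyframes: list of (timestamp_seconds, PIL.Image) pairs.

    Returns the time-tagged keyframes whose concept score for the query
    exceeds the threshold, strongest matches first.
    """
    scored = [(t, concept_score(img, query)) for t, img in keyframes]
    return sorted(((t, s) for t, s in scored if s >= threshold),
                  key=lambda ts: ts[1], reverse=True)
```

A closed label set like ImageNet cannot truly support an "unlimited vocabulary"; in practice one would bridge arbitrary query terms to the CNN's concept space, for example via word-embedding similarity between the query and class names, which this sketch deliberately simplifies to substring matching.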
Original language: English
Number of pages: 6
Publication status: Published - 2018
Event: The International Image Processing Applications and Systems Conference - Sophia Antipolis, France
Duration: 12 Dec 2018 – 14 Dec 2018

Conference

Conference: The International Image Processing Applications and Systems Conference
Abbreviated title: IPAS
Country/Territory: France
City: Sophia Antipolis
Period: 12/12/18 – 14/12/18
