Abstract
Large volumes of egocentric video are collected every day. Whereas standard video summarisation produces an all-purpose summary, here we propose a method for selective video summarisation: the user can query the video with an unrestricted vocabulary of terms, and the result is a time-tagged summary of keyframes related to the query concept. Our method uses a pre-trained Convolutional Neural Network (CNN) for the semantic search, and visualises the generated summary as a compass. Two commonly used datasets were chosen for the evaluation: UTEgo egocentric video and EDUB lifelog.
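The selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CNN is stubbed with fixed per-frame concept probabilities, and the function names and threshold are assumptions.

```python
# Minimal sketch of selective keyframe summarisation: given per-frame
# concept scores from a pre-trained CNN (stubbed here with fixed values),
# keep the frames whose score for the query concept exceeds a threshold
# and tag each kept keyframe with its timestamp.

def select_keyframes(frame_scores, query, threshold=0.5):
    """frame_scores: list of (timestamp_seconds, {concept: probability}) pairs."""
    summary = []
    for t, scores in frame_scores:
        if scores.get(query, 0.0) >= threshold:
            summary.append((t, scores[query]))
    # Rank the selected keyframes by confidence for the queried concept.
    return sorted(summary, key=lambda item: item[1], reverse=True)

# Stubbed CNN output for three frames; a real system would run the CNN here.
frames = [
    (12.0, {"car": 0.9, "tree": 0.2}),
    (47.5, {"car": 0.3, "tree": 0.8}),
    (90.0, {"car": 0.7, "tree": 0.1}),
]
print(select_keyframes(frames, "car"))  # → [(12.0, 0.9), (90.0, 0.7)]
```

Each returned pair is a timestamp and a confidence score, which is the time-tagged form the summary (and its compass visualisation) needs.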
| Original language | English |
|---|---|
| Number of pages | 6 |
| Publication status | Published - 2018 |
| Event | The International Image Processing Applications and Systems Conference - Sophia Antipolis, France Duration: 12 Dec 2018 → 14 Dec 2018 |
Conference
| Conference | The International Image Processing Applications and Systems Conference |
|---|---|
| Abbreviated title | IPAS |
| Country/Territory | France |
| City | Sophia Antipolis |
| Period | 12/12/18 → 14/12/18 |