Selective keyframe summarisation for egocentric videos based on semantic concept search
Research output: Contribution to conference › Paper › peer-reviewed
Electronic versions
- SelectiveVideoSummarisation (proof, PDF, 1.54 MB)
Large volumes of egocentric video data are collected every day. While standard video summarisation offers all-purpose summaries, we propose a method for selective video summarisation: the user can query the video with an unlimited vocabulary of terms, and the result is a time-tagged summary of keyframes related to the query concept. Our method uses a pre-trained Convolutional Neural Network (CNN) for the semantic search and visualises the generated summary as a compass. Two commonly used datasets were chosen for the evaluation: the UTEgo egocentric video dataset and the EDUB lifelog dataset.
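The semantic search step described in the abstract can be sketched as follows: a pre-trained CNN scores each sampled frame for the query concept, and the highest-scoring frames, together with their timestamps, form the selective summary. The sketch below is illustrative rather than the authors' implementation: it assumes a ResNet-50 ImageNet classifier as the pre-trained CNN, a simple substring match between the query term and the ImageNet class names, and hypothetical helpers `concept_score` and `summarise` with a top-k selection.

```python
# Minimal sketch of concept-based keyframe selection, assuming a
# pre-trained ImageNet ResNet-50 stands in for the paper's CNN.
# The query matching and top-k selection are illustrative choices,
# not the authors' exact pipeline.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
class_names = weights.meta["categories"]  # 1000 ImageNet class labels

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def concept_score(frame: Image.Image, query: str) -> float:
    """Probability mass the CNN assigns to classes whose name matches the query."""
    with torch.no_grad():
        probs = model(preprocess(frame).unsqueeze(0)).softmax(dim=1)[0]
    idx = [i for i, name in enumerate(class_names) if query.lower() in name.lower()]
    return probs[idx].sum().item() if idx else 0.0

def summarise(frames, timestamps, query, top_k=10):
    """Return (timestamp, frame) pairs for the top-k frames matching the query."""
    scored = sorted(zip(frames, timestamps),
                    key=lambda ft: concept_score(ft[0], query),
                    reverse=True)
    return [(t, f) for f, t in scored[:top_k]]
```

Supporting an unlimited query vocabulary presumably requires mapping arbitrary terms onto the CNN's concept space, for example via word-embedding similarity between the query and the class labels; the exact substring match above is only a placeholder for that step.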
Original language | English
---|---
Number of pages | 6
Publication status | Published - 2018
Event | The International Image Processing Applications and Systems Conference, Sophia Antipolis, France, 12 Dec 2018 → 14 Dec 2018
Conference
Conference | The International Image Processing Applications and Systems Conference
---|---
Abbreviated title | IPAS
Country/Territory | France
City | Sophia Antipolis
Period | 12/12/18 → 14/12/18