Visual Lifelog Retrieval through Captioning-Enhanced Interpretation
By: Yu-Fei Shih, An-Zi Yen, Hen-Hsen Huang, and more
Potential Business Impact:
Find old photos by typing what you remember.
People often struggle to recall specific details of past experiences, creating a need to revisit those memories. Consequently, lifelog retrieval has emerged as a crucial application, and various studies have explored methods for rapid access to personal lifelogs to assist memory recall. In this paper, we propose a Captioning-Integrated Visual Lifelog (CIVIL) Retrieval System for extracting specific images from a user's visual lifelog based on textual queries. Unlike traditional embedding-based methods, our system first generates captions for visual lifelogs and then uses a text embedding model to project both the captions and user queries into a shared vector space. Visual lifelogs, captured through wearable cameras, provide a first-person viewpoint, so the system must interpret the activities of the individual behind the camera rather than merely describe the scene. To address this, we introduce three distinct approaches: the single caption method, the collective caption method, and the merged caption method, each designed to interpret the life experiences of lifeloggers. Experimental results show that our method effectively describes first-person visual images, enhancing the outcomes of lifelog retrieval. Furthermore, we construct a textual dataset that converts visual lifelogs into captions, thereby reconstructing personal life experiences.
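To make the caption-then-embed retrieval idea concrete, here is a minimal sketch of the pipeline the abstract describes: captions are generated for lifelog images, then a text embedding model places captions and the textual query in a shared vector space and ranks images by similarity. The model name, example captions, and library choice (sentence-transformers) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of captioning-then-embedding lifelog retrieval.
# Captions are assumed to have been generated beforehand by an image
# captioning model prompted from the lifelogger's first-person viewpoint.
from sentence_transformers import SentenceTransformer, util

# Hypothetical captions for three lifelog images.
captions = [
    "The lifelogger is eating breakfast at a kitchen table.",
    "The lifelogger is walking through a park with a dog.",
    "The lifelogger is reading a book on a train.",
]

# Project captions and the textual query into a shared vector space
# with a text embedding model (all-MiniLM-L6-v2 used here as a stand-in).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
caption_vecs = embedder.encode(captions, convert_to_tensor=True)

query = "that morning I had breakfast before work"
query_vec = embedder.encode(query, convert_to_tensor=True)

# Rank lifelog images by cosine similarity between the query and captions.
scores = util.cos_sim(query_vec, caption_vecs)[0]
for idx in scores.argsort(descending=True):
    print(f"{scores[idx]:.3f}  {captions[idx]}")
```

In this sketch the highest-scoring caption (the breakfast scene) would be returned first; the paper's single, collective, and merged caption methods differ in how those captions are produced, not in this retrieval step.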
Similar Papers
The State-of-the-Art in Lifelog Retrieval: A Review of Progress at the ACM Lifelog Search Challenge Workshop 2022-24
Multimedia
Helps computers find memories from your life.
lifeXplore at the Lifelog Search Challenge 2020
Multimedia
Find specific moments in your life videos.
OpenLifelogQA: An Open-Ended Multi-Modal Lifelog Question-Answering Dataset
Multimedia
Lets you ask your life's recorded memories questions.