CRAM: Large-scale Video Continual Learning with Bootstrapped Compression
By: Shivani Mall, Joao F. Henriques
Potential Business Impact:
Stores many videos using less computer memory.
Continual learning (CL) promises to allow neural networks to learn from continuous streams of inputs, instead of IID (independent and identically distributed) sampling, which requires random access to a full dataset. This would allow for much smaller storage requirements and self-sufficiency of deployed systems that cope with natural distribution shifts, similarly to biological learning. We focus on video CL employing a rehearsal-based approach, which reinforces past samples from a memory buffer. We posit that part of the reason why practical video CL is challenging is the high memory requirements of video, further exacerbated by long videos and continual streams, which are at odds with common rehearsal-buffer size constraints. To address this, we propose to use compressed vision, i.e. store video codes (embeddings) instead of raw inputs, and train a video classifier by IID sampling from this rolling buffer. Training a video compressor online (i.e., not depending on any pre-trained networks) means that it is also subject to catastrophic forgetting. We propose a scheme to deal with this forgetting by refreshing video codes, which requires careful decompression with a previous version of the network and recompression with a new one. We name our method Continually Refreshed Amodal Memory (CRAM). We expand current video CL benchmarks to large-scale settings, namely EpicKitchens-100 and Kinetics-700, storing thousands of relatively long videos in under 2 GB, and demonstrate empirically that our video CL method outperforms prior art with a significantly reduced memory footprint.
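The mechanism the abstract describes has two parts: a rolling buffer that stores compressed codes rather than raw frames, and a periodic refresh that decodes each stored code with a snapshot of the previous decoder and re-encodes it with the current encoder. Below is a minimal PyTorch sketch of that idea; the class name `CodeRefreshBuffer`, the method names, and the refresh schedule are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class CodeRefreshBuffer:
    """Rolling rehearsal buffer that stores compressed video codes, not raw frames.

    `encoder` and `decoder` form the online-trained video compressor.
    All names and the refresh schedule here are illustrative assumptions.
    """

    def __init__(self, encoder: nn.Module, decoder: nn.Module, capacity: int):
        self.encoder = encoder
        self.decoder = decoder
        self.capacity = capacity
        self.codes: list[torch.Tensor] = []
        self.labels: list[int] = []

    @torch.no_grad()
    def add(self, clip: torch.Tensor, label: int) -> None:
        """Compress an incoming clip and keep only its code (rolling FIFO)."""
        if len(self.codes) >= self.capacity:
            self.codes.pop(0)
            self.labels.pop(0)
        self.codes.append(self.encoder(clip))
        self.labels.append(label)

    @torch.no_grad()
    def refresh(self, old_decoder: nn.Module) -> None:
        """Decode every stored code with the *previous* decoder snapshot and
        re-encode with the current encoder, so old codes remain readable as
        the compressor keeps training (countering its catastrophic forgetting)."""
        self.codes = [self.encoder(old_decoder(code)) for code in self.codes]

    def sample(self, batch_size: int) -> tuple[torch.Tensor, torch.Tensor]:
        """Draw an IID minibatch of stored codes for classifier training."""
        idx = torch.randint(len(self.codes), (batch_size,))
        codes = torch.stack([self.codes[i] for i in idx])
        labels = torch.tensor([self.labels[i] for i in idx])
        return codes, labels


# Illustrative training-loop fragment (timing of the refresh is an
# assumption, not taken from the paper):
#
#   old_decoder = copy.deepcopy(buffer.decoder)
#   ...one or more compressor update steps...
#   buffer.refresh(old_decoder)
```

Storing codes rather than frames is what lets thousands of relatively long videos fit in under 2 GB, and the refresh step is what keeps those codes decodable as the online-trained compressor drifts.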
Similar Papers
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.
Unsupervised Video Continual Learning via Non-Parametric Deep Embedded Clustering
CV and Pattern Recognition
Teaches computers to learn from videos without labels.
Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer
Machine Learning (CS)
Makes smart computer models learn without forgetting.