When Better Eyes Lead to Blindness: A Diagnostic Study of the Information Bottleneck in CNN-LSTM Image Captioning Models
By: Hitesh Kumar Gupta
Image captioning, situated at the intersection of computer vision and natural language processing, requires a sophisticated understanding of both visual scenes and linguistic structure. While modern approaches are dominated by large-scale Transformer architectures, this paper documents the systematic, iterative development of foundational image captioning models, progressing from a simple CNN-LSTM encoder-decoder to a competitive attention-based system. It presents a series of five models, beginning with Genesis and concluding with Nexus, an advanced model featuring an EfficientNetV2B3 backbone and a dynamic attention mechanism. The experiments chart the impact of successive architectural enhancements and demonstrate a key finding within the classic CNN-LSTM paradigm: merely upgrading the visual backbone without a corresponding attention mechanism can degrade performance, because the single-vector bottleneck between encoder and decoder cannot transmit the richer visual detail the stronger backbone extracts. This insight motivates and validates the architectural shift to attention. Trained on the MS COCO 2017 dataset, the final model, Nexus, achieves a BLEU-4 score of 31.4, surpassing several foundational benchmarks and validating the iterative design process. This work provides a clear, replicable blueprint for understanding the core architectural principles that underpin modern vision-language tasks.
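To make the bottleneck argument concrete, the sketch below shows one attention-based decoding step over an EfficientNetV2B3 feature grid in TensorFlow/Keras. It is a minimal, hypothetical sketch, not the paper's Nexus configuration: the layer widths, 300x300 input, frozen backbone, and additive (Bahdanau-style) attention are all illustrative assumptions.

```python
# Minimal sketch of an attention-based CNN-LSTM decoder step (TensorFlow/Keras).
# Assumptions: layer widths, input size, and additive attention are illustrative
# choices, not the exact Nexus configuration from the paper.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, EMBED_DIM, UNITS = 10000, 256, 512

# Encoder: keep the EfficientNetV2B3 output as a grid of region features.
# (A single-vector model would global-average-pool here, creating the
# bottleneck the abstract describes.)
backbone = tf.keras.applications.EfficientNetV2B3(
    include_top=False, weights="imagenet", input_shape=(300, 300, 3))
backbone.trainable = False

image_in = layers.Input(shape=(300, 300, 3))
grid = backbone(image_in)                                  # (H, W, C) feature grid
regions = layers.Reshape((-1, grid.shape[-1]))(grid)       # (H*W, C) region features
regions = layers.Dense(UNITS)(regions)                     # project to decoder width

# One decoding step: embed the previous word, attend over regions with the
# current hidden state as the query, then advance the LSTM state.
word_in = layers.Input(shape=(1,), dtype="int32")
h_in = layers.Input(shape=(UNITS,))
c_in = layers.Input(shape=(UNITS,))

embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(word_in)   # (1, EMBED_DIM)
query = layers.Reshape((1, UNITS))(h_in)
context = layers.AdditiveAttention()([query, regions])     # (1, UNITS) context vector
step_in = layers.Concatenate()([embed, context])           # (1, EMBED_DIM + UNITS)
_, h_out, c_out = layers.LSTM(UNITS, return_state=True)(
    step_in, initial_state=[h_in, c_in])
logits = layers.Dense(VOCAB_SIZE)(h_out)                   # next-word scores

decoder_step = tf.keras.Model(
    [image_in, word_in, h_in, c_in], [logits, h_out, c_out])
```

Greedy decoding would call this step model repeatedly, feeding back the argmax word and the LSTM state; the contrast with the single-vector baseline is that the context vector is recomputed from the full region grid at every step rather than fixed once at encoding time.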