Deep Learning-Driven Multimodal Detection and Movement Analysis of Objects in Culinary
By: Tahoshin Alam Ishat, Mohammad Abdul Qayum
Potential Business Impact:
Watches and listens to cooking to write step-by-step recipes.
This research explores and fine-tunes existing models, combining a YOLOv8 segmentation model, an LSTM trained on sequences of hand keypoint motion, and an ASR model (Whisper-base) to extract enough information for an LLM (TinyLlama) to identify the recipe and generate a step-by-step guide to the cooking procedure. All data were gathered by the author to build a robust, task-specific system that performs well in complex and challenging environments, demonstrating how naturally computer vision extends to everyday activities such as kitchen work. The approach points toward many more crucial tasks in our day-to-day lives.
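The abstract describes a four-stage pipeline: object segmentation, hand-motion classification, speech transcription, and LLM-based recipe generation. Below is a minimal sketch of how such a pipeline could be wired together, assuming the ultralytics, openai-whisper, and transformers packages with PyTorch. The checkpoint names, action labels, keypoint format, and prompt wording are illustrative assumptions, not the authors' fine-tuned models or data.

```python
# Hypothetical sketch of the described pipeline: YOLOv8 segmentation for objects,
# an LSTM over hand-keypoint sequences for action cues, Whisper-base for speech,
# and TinyLlama to turn the combined evidence into a step-by-step recipe guide.
import torch
import torch.nn as nn
import whisper
from ultralytics import YOLO
from transformers import AutoModelForCausalLM, AutoTokenizer


class HandMotionLSTM(nn.Module):
    """Classifies a sequence of 2D hand keypoints into a coarse action label."""

    def __init__(self, num_keypoints: int = 21, hidden: int = 128, num_actions: int = 8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_keypoints * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, num_keypoints * 2)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])


# Placeholder action vocabulary; the paper's own label set is not public here.
ACTIONS = ["chopping", "stirring", "pouring", "kneading", "flipping", "peeling", "whisking", "idle"]


def analyze_clip(video_path: str, audio_path: str, keypoint_seq: torch.Tensor) -> str:
    # 1) Detect and segment kitchen objects frame by frame with YOLOv8.
    detector = YOLO("yolov8n-seg.pt")  # stock checkpoint; the paper fine-tunes its own
    seen_objects = set()
    for result in detector(video_path, stream=True):
        for box in result.boxes:
            seen_objects.add(result.names[int(box.cls)])

    # 2) Classify the hand-motion sequence with the LSTM.
    motion_model = HandMotionLSTM()
    motion_model.eval()
    with torch.no_grad():
        action = ACTIONS[motion_model(keypoint_seq.unsqueeze(0)).argmax(dim=-1).item()]

    # 3) Transcribe spoken narration with Whisper-base.
    narration = whisper.load_model("base").transcribe(audio_path)["text"]

    # 4) Ask TinyLlama to infer the recipe and write step-by-step instructions.
    name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
    tokenizer = AutoTokenizer.from_pretrained(name)
    llm = AutoModelForCausalLM.from_pretrained(name)
    prompt = (
        f"Objects seen: {sorted(seen_objects)}. Hand action: {action}. "
        f"Narration: {narration}\nWrite a step-by-step guide for this recipe:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = llm.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

In practice each stage would be fine-tuned on the task-specific kitchen data described in the abstract; the sketch only shows how the four models' outputs could be fused into a single prompt for the language model.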
Similar Papers
LLMs-based Augmentation for Domain Adaptation in Long-tailed Food Datasets
CV and Pattern Recognition
Lets phones know what food you're eating.
AI-Driven Relocation Tracking in Dynamic Kitchen Environments
CV and Pattern Recognition
Robot finds lost items in messy kitchens.
Object Detection with Multimodal Large Vision-Language Models: An In-depth Review
CV and Pattern Recognition
Lets computers see and understand pictures better.