A Surveillance-Based Interactive Robot
By: Kshitij Kavimandan, Pooja Mangal, Devanshi Mehta
Potential Business Impact:
The robot obeys voice commands and streams video to your phone.
We build a mobile surveillance robot that streams video in real time and responds to speech, so a user can monitor and steer it from a phone or browser. The system uses two Raspberry Pi 4 units: a front unit on a differential-drive base carrying the camera, microphone, and speaker, and a central unit that serves the live feed and runs perception. Video is streamed with FFmpeg. Objects in the scene are detected with YOLOv3 to support navigation and event awareness. For voice interaction, we use Python libraries for speech recognition, multilingual translation, and text-to-speech, so the robot can take spoken commands and read back responses in the requested language. A Kinect RGB-D sensor provides visual input and obstacle cues. In indoor tests the robot detects common objects at interactive frame rates on CPU, recognises commands reliably, and translates them into actions without manual control. The design relies on off-the-shelf hardware and open-source software, making it easy to reproduce. We discuss limitations and practical extensions, including sensor fusion with ultrasonic range data, GPU acceleration, and adding face and text recognition.
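To make the pipeline concrete, the sketches below illustrate two of the components under stated assumptions. The first is a minimal CPU object-detection loop with YOLOv3; the abstract does not name an inference framework, so OpenCV's cv2.dnn module is an assumption, and the config/weight paths and camera index are placeholders.

```python
# Minimal YOLOv3 detection loop on CPU via OpenCV's DNN module (assumed framework).
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholder paths
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)  # Kinect RGB stream or Pi camera, depending on setup
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(out_layers):
        for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > 0.5:
                cx, cy = det[0] * w, det[1] * h
                print(f"class {class_id} at ({cx:.0f}, {cy:.0f}) conf {confidence:.2f}")
```

The second sketch shows one way the spoken-command loop could be wired: recognise speech, translate it to English, map it to a drive action, and speak a reply in the user's language. The specific libraries (SpeechRecognition, deep-translator, gTTS), the example language codes, and the drive() helper are assumptions; the abstract only says "Python libraries".

```python
# Hedged sketch of the voice-command loop: listen, translate, act, reply.
import os
import speech_recognition as sr
from deep_translator import GoogleTranslator
from gtts import gTTS

COMMANDS = {"forward": "moving forward", "back": "moving back",
            "left": "turning left", "right": "turning right", "stop": "stopping"}

def drive(action):
    # Hypothetical placeholder for the GPIO/motor-driver call on the front Pi.
    print("drive:", action)

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic)

heard = recognizer.recognize_google(audio, language="hi-IN")  # example user language
command = GoogleTranslator(source="auto", target="en").translate(heard).lower()

for keyword, reply in COMMANDS.items():
    if keyword in command:
        drive(keyword)
        # Read the response back in the requested language via text-to-speech.
        reply_text = GoogleTranslator(source="en", target="hi").translate(reply)
        gTTS(text=reply_text, lang="hi").save("reply.mp3")
        os.system("mpg123 reply.mp3")  # assumes an mp3 player is installed on the Pi
        break
```

In a deployment along the lines described, the detection loop would run on the central unit next to the FFmpeg-served feed, while the command loop runs on the front unit with the microphone and speaker; the split shown here is an illustration, not the authors' exact architecture.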
Similar Papers
A Modular AIoT Framework for Low-Latency Real-Time Robotic Teleoperation in Smart Cities
Robotics
Lets robots be controlled from far away.
A Modular Object Detection System for Humanoid Robots Using YOLO
Robotics
Helps robots see better and faster.
Autonomous AI Surveillance: Multimodal Deep Learning for Cognitive and Behavioral Monitoring
CV and Pattern Recognition
Spots students sleeping or on phones.