Cross-modal Retrieval Models for Stripped Binary Analysis
By: Guoqiang Chen, Lingyun Ying, Ziyang Song, and more
Potential Business Impact:
Finds hidden computer code problems faster.
LLM-agent-based binary code analysis has demonstrated significant potential across a wide range of software security scenarios, including vulnerability detection and malware analysis. In agent workflows, however, retrieving the positive match from thousands of stripped binary functions based on a user query remains under-studied and challenging, as the absence of symbolic information distinguishes it from source code retrieval. In this paper, we introduce BinSeek, the first two-stage cross-modal retrieval framework for stripped binary code analysis. It consists of two models: BinSeek-Embedding, trained on a large-scale dataset to learn the semantic relevance between binary code and natural language descriptions, and BinSeek-Reranker, which learns to carefully judge the relevance of candidate code to the description with context augmentation. To this end, we built an LLM-based data synthesis pipeline to automate training-data construction, also deriving a domain benchmark for future research. Our evaluation shows that BinSeek achieves state-of-the-art performance, surpassing same-scale models by 31.42% in Rec@3 and 27.17% in MRR@3, and outperforming advanced general-purpose models with 16 times as many parameters.
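The two-stage design described above (a fast embedding model for recall, followed by a slower reranker for precision) can be sketched generically. This is a minimal illustration of the retrieval pattern only, not the BinSeek models: the `embed` and `rerank_score` functions below are toy placeholders standing in for a trained cross-modal encoder and a context-augmented reranker.

```python
import math

def embed(text):
    # Toy bag-of-characters embedding, L2-normalized.
    # A real system would use a trained cross-modal encoder.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def rerank_score(query, candidate):
    # Toy reranker: word overlap between query and candidate.
    # A real reranker would jointly encode the pair.
    q = set(query.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / max(len(q), 1)

def retrieve(query, corpus, k_recall=3, k_final=1):
    # Stage 1: cheap embedding-based recall of top-k candidates.
    q_vec = embed(query)
    recalled = sorted(corpus,
                      key=lambda d: cosine(q_vec, embed(d)),
                      reverse=True)[:k_recall]
    # Stage 2: rerank only the recalled candidates with the finer model.
    return sorted(recalled,
                  key=lambda d: rerank_score(query, d),
                  reverse=True)[:k_final]

corpus = [
    "parse network packet header",
    "encrypt buffer with aes key schedule",
    "compute crc32 checksum of file",
]
print(retrieve("function that encrypts a buffer", corpus))
```

The key design point mirrored here is that the expensive pairwise scorer only ever sees the small recalled set, which is what makes reranking affordable over thousands of stripped functions.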
Similar Papers
Retrv-R1: A Reasoning-Driven MLLM Framework for Universal and Efficient Multimodal Retrieval
CV and Pattern Recognition
Helps computers find information better and faster.
Trim My View: An LLM-Based Code Query System for Module Retrieval in Robotic Firmware
Cryptography and Security
Lets computers understand old computer code's purpose.
Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval
CV and Pattern Recognition
Finds information using pictures and words together.