Score: 1

Web Page Classification using LLMs for Crawling Support

Published: May 11, 2025 | arXiv ID: 2505.06972v1

By: Yuichi Sasazawa, Yasuhiro Sogawa

Potential Business Impact:

Helps crawlers find new web pages faster by identifying the index pages most likely to link to them.

Business Areas:
Search Engine, Internet Services

A web crawler is a system designed to collect web pages, and crawling new pages efficiently requires appropriate scheduling algorithms. Website features such as XML sitemaps and the frequency of past page updates provide important clues for reaching new pages, but they cannot be applied universally across diverse sites. In this study, we propose a method to efficiently collect new pages by classifying web pages into two types, "Index Pages" and "Content Pages," using a large language model (LLM), and leveraging the classification results to select index pages as starting points for accessing new pages. We construct a dataset with automatically annotated page types and evaluate our approach from two perspectives: page type classification performance and the coverage of new pages. Experimental results demonstrate that the LLM-based method outperforms baseline methods on both metrics.
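
As a rough illustration of the idea in the abstract, the sketch below asks an LLM to label a fetched page as an "Index Page" or "Content Page" and then expands links only from index pages, treating them as starting points for discovering new pages. The chat API, model name, prompt wording, truncation limit, and the `fetch_text`/`extract_links` helpers are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of LLM-based page-type classification for crawl scheduling.
# Assumptions (not from the paper): an OpenAI-style chat API, the prompt
# wording, and the 4000-character truncation are illustrative choices.
from collections import deque
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Classify the following web page as exactly one of:\n"
    "INDEX   - a page that mainly lists links to other pages "
    "(e.g. category pages, archives, sitemaps)\n"
    "CONTENT - a page whose main purpose is its own body "
    "(e.g. an article or product page)\n"
    "Answer with the single word INDEX or CONTENT.\n\n"
    "Page text:\n{page_text}"
)

def classify_page(page_text: str, model: str = "gpt-4o-mini") -> str:
    """Return 'INDEX' or 'CONTENT' for the given page text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT.format(page_text=page_text[:4000])}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().upper()
    return "INDEX" if answer.startswith("INDEX") else "CONTENT"

def crawl(seed_urls, fetch_text, extract_links, budget=100):
    """Breadth-first crawl that only expands links found on index pages.

    fetch_text(url) -> str and extract_links(url, text) -> list[str] are
    hypothetical helpers the caller supplies (e.g. wrapping an HTTP client
    and an HTML parser); they are not part of the paper's method.
    """
    frontier = deque(seed_urls)
    seen = set(seed_urls)
    collected = []
    while frontier and budget > 0:
        url = frontier.popleft()
        budget -= 1
        text = fetch_text(url)
        if classify_page(text) == "INDEX":
            # Index pages act as starting points: follow their links,
            # since newly published pages tend to be linked from them.
            for link in extract_links(url, text):
                if link not in seen:
                    seen.add(link)
                    frontier.append(link)
        else:
            # Content pages are the collection targets; do not expand them.
            collected.append(url)
    return collected
```

This mirrors only the high-level scheme the abstract describes: index pages serve as starting points for reaching new pages, while content pages are collected rather than expanded.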

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Information Retrieval