Towards Better Evaluation for Generated Patent Claims
By: Lekang Jiang, Pascal A. Scherz, Stephan Goetz
Potential Business Impact:
Helps judge whether computer-written patent claims meet expert standards.
Patent claims define the scope of protection and establish the legal boundaries of an invention. Drafting these claims is a complex and time-consuming process that usually requires the expertise of skilled patent attorneys, which can create a significant access barrier for many small enterprises. To address these challenges, researchers have investigated the use of large language models (LLMs) to automate patent claim generation. However, existing studies highlight inconsistencies between automated evaluation metrics and human expert assessments. To bridge this gap, we introduce Patent-CE, the first comprehensive benchmark for evaluating patent claims. Patent-CE includes comparative claim evaluations annotated by patent experts, focusing on five key criteria: feature completeness, conceptual clarity, terminology consistency, logical linkage, and overall quality. Additionally, we propose PatClaimEval, a novel multi-dimensional evaluation method specifically designed for patent claims. Our experiments demonstrate that, among all tested metrics, PatClaimEval achieves the highest correlation with human expert evaluations across all assessment criteria. This research provides the groundwork for more accurate evaluation of automated patent claim generation systems.
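At the heart of such a benchmark is measuring how well an automated metric's scores agree with expert ratings. The following is a minimal Python sketch of per-criterion rank correlation, assuming a hypothetical data layout (metric_scores and expert_scores as dicts of aligned score lists); it is an illustration of the general technique, not the authors' PatClaimEval implementation.

from scipy.stats import spearmanr

# The five criteria named in the paper's abstract.
CRITERIA = [
    "feature_completeness",
    "conceptual_clarity",
    "terminology_consistency",
    "logical_linkage",
    "overall_quality",
]

def correlate_with_experts(metric_scores, expert_scores):
    # metric_scores / expert_scores: dicts mapping each criterion to a list
    # of scores, aligned per generated claim (hypothetical layout).
    # Returns the Spearman rank correlation for each criterion.
    return {
        c: spearmanr(metric_scores[c], expert_scores[c]).correlation
        for c in CRITERIA
    }

# Toy usage (numbers are illustrative only):
metric = {c: [0.72, 0.41, 0.88, 0.55] for c in CRITERIA}
experts = {c: [4, 2, 5, 3] for c in CRITERIA}
print(correlate_with_experts(metric, experts))

A rank correlation such as Spearman's is a natural choice here because expert ratings are ordinal; the paper's reported result is that PatClaimEval attains the highest such agreement with experts among the metrics tested.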
Similar Papers
Towards Automated Quality Assurance of Patent Specifications: A Multi-Dimensional LLM Framework
Information Retrieval
Checks patents for mistakes, suggests fixes.
Enriching Patent Claim Generation with European Patent Dataset
Computation and Language
Helps lawyers write better patent claims faster.