InferF: Declarative Factorization of AI/ML Inferences over Joins

Published: November 25, 2025 | arXiv ID: 2511.20489v1

By: Kanchan Chowdhury, Lixi Zhou, Lulu Xie, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Speeds up AI/ML inference by eliminating redundant computation on records that repeat in join outputs.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Real-world AI/ML workflows often apply inference computations to feature vectors joined from multiple datasets. To avoid the redundant AI/ML computations caused by repeated data records in the join's output, factorized ML has been proposed to decompose ML computations into sub-computations executed on each normalized dataset. However, there is insufficient discussion of how factorized ML impacts AI/ML inference over multi-way joins. To address this limitation, we propose InferF, a novel declarative system focused on factorizing arbitrary inference workflows, represented as analyzable expressions, over multi-way joins. We formalize the problem as flexibly pushing down partial factorized computations to qualified nodes in the join tree so as to minimize the combined inference and join costs, and propose two algorithms to solve it: (1) a greedy algorithm based on a per-node cost function that estimates the effect on overall latency of pushing a subset of factorized computations to a node, and (2) a genetic algorithm that iteratively enumerates and evaluates promising factorization plans. We implement InferF on Velox, an open-source database engine from Meta, evaluate it on real-world datasets, observe up to 11.3x speedups, and systematically summarize the factors that determine when factorized ML can benefit AI/ML inference workflows.
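The core idea can be illustrated with a minimal sketch (this is not InferF's actual API, just an illustration of factorized inference for a hypothetical linear model over a two-way join): instead of materializing the join and scoring every joined row, partial dot products are pushed down to each base table so that each record is scored once, regardless of its join multiplicity.

```python
# Hedged sketch of factorized inference for a linear model over a two-way
# key join. Tables are dicts mapping a join key to a list of feature vectors.

def naive_inference(R, S, w_R, w_S):
    """Materialize the join, then score every joined row in full."""
    scores = []
    for k, xs_r in R.items():
        for x_r in xs_r:
            for x_s in S.get(k, []):
                # the full dot product is recomputed for every joined row,
                # so a record appearing in many rows is scored many times
                score = sum(w * x for w, x in zip(w_R, x_r)) + \
                        sum(w * x for w, x in zip(w_S, x_s))
                scores.append(score)
    return scores

def factorized_inference(R, S, w_R, w_S):
    """Push partial dot products down to each base table, then join."""
    # each base record is scored exactly once
    pr = {k: [sum(w * x for w, x in zip(w_R, x)) for x in xs]
          for k, xs in R.items()}
    ps = {k: [sum(w * x for w, x in zip(w_S, x)) for x in xs]
          for k, xs in S.items()}
    # the join now only adds precomputed scalars
    return [a + b for k in pr for a in pr[k] for b in ps.get(k, [])]
```

The naive version performs O(|join output| x feature dimension) work, while the factorized version reduces the per-row cost at join time to a scalar addition; InferF generalizes this pushdown to arbitrary analyzable expressions over multi-way join trees.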

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Databases