Pruning in Snowflake: Working Smarter, Not Harder
By: Andreas Zimmerer, Damien Dam, Jan Kossmann, and more
Potential Business Impact:
Skips unneeded data, making queries much faster.
Modern cloud-based data analytics systems must efficiently process petabytes of data residing on cloud storage. A key optimization technique in state-of-the-art systems like Snowflake is partition pruning - skipping chunks of data that do not contain relevant information for computing query results. While partition pruning based on query predicates is a well-established technique, we present new pruning techniques that extend the scope of partition pruning to LIMIT, top-k, and JOIN operations, significantly expanding the opportunities for pruning across diverse query types. We detail the implementation of each method and examine their impact on real-world workloads. Our analysis of Snowflake's production workloads reveals that real-world analytical queries exhibit much higher selectivity than commonly assumed, yielding effective partition pruning and highlighting the need for more realistic benchmarks. We show that we can harness high selectivity by utilizing min/max metadata available in modern data analytics systems and data lake formats like Apache Iceberg, reducing the number of processed micro-partitions by 99.4% across the Snowflake data platform.
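The min/max-metadata pruning the abstract describes can be illustrated with a small sketch. This is a hypothetical simplification, not Snowflake's actual implementation: `PartitionMeta`, `prune_by_predicate`, and `prune_for_top_k` are invented names, and real micro-partitions track metadata per column for many data types. The idea is the same, though: a partition whose min/max range cannot contain qualifying rows is skipped without being read.

```python
# Illustrative sketch of zone-map (min/max) partition pruning.
# All names and structures here are hypothetical, for exposition only.
from dataclasses import dataclass
import heapq

@dataclass
class PartitionMeta:
    """Per-column min/max metadata kept for one micro-partition."""
    min_val: int
    max_val: int

def prune_by_predicate(partitions, lo, hi):
    """Predicate pruning: keep only partitions whose [min, max] range
    overlaps the predicate range [lo, hi]; all others cannot match."""
    return [p for p in partitions if p.max_val >= lo and p.min_val <= hi]

def prune_for_top_k(partitions, k):
    """Top-k pruning (k largest values): the k-th largest partition *min*
    is a lower bound on the final top-k, because k partitions are known to
    hold only rows at least that large. Any partition whose max falls
    below this bound cannot contribute and is skipped."""
    if len(partitions) <= k:
        return partitions
    threshold = heapq.nlargest(k, (p.min_val for p in partitions))[-1]
    return [p for p in partitions if p.max_val >= threshold]

parts = [PartitionMeta(0, 9), PartitionMeta(10, 19),
         PartitionMeta(20, 29), PartitionMeta(15, 40)]

# Predicate: value BETWEEN 12 AND 14 — only one partition's range overlaps.
survivors = prune_by_predicate(parts, 12, 14)

# Top-2 largest values: the partition spanning [0, 9] can never contribute.
topk_survivors = prune_for_top_k(parts, 2)
```

LIMIT pruning follows the same spirit even more directly: once enough partitions have been scanned to produce the requested number of rows, the remaining partitions need not be opened at all.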