Score: 2

RoMe: Row Granularity Access Memory System for Large Language Models

Published: December 1, 2025 | arXiv ID: 2512.01541v1

By: Hwayong Nam, Seungmin Baek, Jumin Kim, and more

BigTech Affiliations: Meta

Potential Business Impact:

Speeds up memory for AI systems by reading data in bigger, row-sized chunks.

Business Areas:
Hardware

Modern HBM-based memory systems have evolved over generations while retaining cache-line-granularity accesses. Preserving this fine granularity necessitated the introduction of bank groups and pseudo channels. These structures expand timing parameters and control overhead, significantly increasing memory controller scheduling complexity. Large language models (LLMs) now dominate deep learning workloads, streaming contiguous data blocks ranging from several kilobytes to megabytes per operation. In a conventional HBM-based memory system, each of these transfers is fragmented into hundreds of 32B cache-line transactions, forcing the memory controller to employ unnecessarily intricate scheduling and leading to growing inefficiency. To address this problem, we propose RoMe. RoMe accesses DRAM at row granularity and removes column addressing, bank groups, and pseudo channels from the memory interface. This design simplifies memory scheduling and requires fewer pins per channel. The freed pins are aggregated to form additional channels, increasing overall bandwidth by 12.5% with minimal extra pins. RoMe demonstrates how memory scheduling logic can be significantly simplified for representative LLM workloads, and presents an alternative approach for next-generation HBM-based memory systems, achieving increased bandwidth with minimal hardware overhead.

Country of Origin
🇰🇷 🇺🇸 Republic of Korea, United States

Page Count
15 pages

Category
Computer Science:
Hardware Architecture