PlantBiMoE: A Bidirectional Foundation Model with SparseMoE for Plant Genomes
By: Kepeng Lin, Qizhe Zhang, Rui Wang, and more
Potential Business Impact:
Helps scientists understand plant DNA faster.
Understanding the underlying linguistic rules of plant genomes remains a fundamental challenge in computational biology. Recent advances, including AgroNT and PDLLMs, have made notable progress, but they suffer from excessive parameter counts and a limited ability to model the bidirectional nature of DNA strands, respectively. To address these limitations, we propose PlantBiMoE, a lightweight and expressive plant genome language model that integrates bidirectional Mamba with a Sparse Mixture-of-Experts (SparseMoE) framework. The bidirectional Mamba enables the model to capture structural dependencies across both the forward and reverse DNA strands, while SparseMoE significantly reduces the number of active parameters, improving computational efficiency without sacrificing modeling capacity. We evaluated our model on the Modified Plants Genome Benchmark (MPGB), an enhanced genomic benchmark that consolidates 31 datasets across 11 representative tasks, with input sequence lengths ranging from 50 to 6,000 bp. Experimental results show that PlantBiMoE achieves the best performance on 20 of the 31 datasets and the best average performance among the compared models. In summary, these results demonstrate that our model effectively represents plant genomic sequences and can serve as a robust computational tool for diverse genomic tasks, contributing to plant genomics, gene editing, and synthetic biology. The code is available at: https://github.com/HUST-Keep-Lin/PlantBiMoE
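To make the two architectural ingredients named in the abstract concrete, here is a minimal PyTorch sketch of (1) a bidirectional block that runs a sequence mixer over both reading directions of a tokenized DNA sequence and fuses the two views, and (2) a top-k SparseMoE feed-forward layer that activates only a few experts per token. This is not the authors' implementation: a depthwise causal convolution stands in for the Mamba SSM block, and every module name, expert count, and dimension here is an illustrative assumption; consult the linked repository for the actual model.

```python
# Hedged sketch only -- NOT the PlantBiMoE code. A depthwise causal Conv1d
# stands in for the Mamba SSM block; names and sizes are illustrative.
import torch
import torch.nn as nn


class SequenceMixer(nn.Module):
    """Stand-in for a Mamba-style block: depthwise causal conv + gating."""
    def __init__(self, d_model: int, kernel_size: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size - 1, groups=d_model)
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, x):                              # x: (B, L, D)
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return h * torch.sigmoid(self.gate(x))         # gated mix, (B, L, D)


class BidirectionalBlock(nn.Module):
    """Run the mixer forward and on the reversed sequence, then fuse both."""
    def __init__(self, d_model: int):
        super().__init__()
        self.fwd = SequenceMixer(d_model)
        self.bwd = SequenceMixer(d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x):                              # x: (B, L, D)
        h_fwd = self.fwd(x)
        h_bwd = self.bwd(torch.flip(x, dims=[1])).flip(dims=[1])
        return self.fuse(torch.cat([h_fwd, h_bwd], dim=-1))


class SparseMoE(nn.Module):
    """Top-k routing: each token is weighted over only k of n expert FFNs."""
    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (B, L, D)
        logits = self.router(x)                        # (B, L, E)
        weights, idx = logits.topk(self.k, dim=-1)     # top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # For clarity each expert runs densely; a real implementation would
        # dispatch only the routed tokens to get the compute savings.
        for e, expert in enumerate(self.experts):
            gate = (weights * (idx == e)).sum(dim=-1, keepdim=True)  # (B, L, 1)
            out = out + gate * expert(x)
        return out


if __name__ == "__main__":
    tokens = torch.randn(2, 128, 256)                  # (batch, length, hidden)
    layer = nn.Sequential(BidirectionalBlock(256), SparseMoE(256))
    print(layer(tokens).shape)                         # torch.Size([2, 128, 256])
```

The sketch only illustrates why the combination is lightweight: the bidirectional pass shares one fusion step over both strand directions, and the router keeps the number of active expert parameters per token small even as the total expert count grows.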
Similar Papers
MoBE: Mixture-of-Basis-Experts for Compressing MoE-based LLMs
Machine Learning (CS)
Makes big AI models smaller without losing smarts.
MoMoE: A Mixture of Expert Agent Model for Financial Sentiment Analysis
Computational Engineering, Finance, and Science
Makes AI smarter by letting many AI parts work together.
DualSparse-MoE: Coordinating Tensor/Neuron-Level Sparsity with Expert Partition and Reconstruction
Machine Learning (CS)
Makes smart computer programs run faster and better.