EnviroLLM: Resource Tracking and Optimization for Local AI
By: Troy Allen
Large language models (LLMs) are increasingly deployed locally for privacy and accessibility, yet users lack tools to measure their resource usage, environmental impact, and efficiency. This paper presents EnviroLLM, an open-source toolkit for tracking, benchmarking, and optimizing performance and energy consumption when running LLMs on personal devices. The system provides real-time process monitoring, benchmarking across multiple platforms (Ollama, LM Studio, vLLM, and OpenAI-compatible APIs), persistent storage with visualizations for longitudinal analysis, and personalized model and optimization recommendations. It also pairs LLM-as-judge quality evaluations with energy and speed metrics, letting users assess quality-efficiency tradeoffs when testing models with custom prompts.
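To make the core metrics concrete, the kind of per-request measurement such a toolkit performs can be sketched as follows. This is a minimal illustration, not EnviroLLM's implementation: `benchmark_generation` and `avg_power_watts` are hypothetical names, and the energy figure here is a crude power-times-duration estimate standing in for real hardware readings (e.g., RAPL counters or GPU telemetry).

```python
import time

def benchmark_generation(generate, prompt, avg_power_watts=45.0):
    """Time one text-generation call and derive speed/energy metrics.

    `generate` is any callable (Ollama, vLLM, an OpenAI-compatible client)
    that returns generated text. `avg_power_watts` is an assumed average
    device draw, a placeholder for measured power on real hardware.
    """
    start = time.perf_counter()
    output = generate(prompt)
    elapsed = time.perf_counter() - start

    tokens = len(output.split())  # crude whitespace token count
    return {
        "tokens_per_second": tokens / elapsed if elapsed > 0 else 0.0,
        "energy_joules": avg_power_watts * elapsed,  # P (W) x t (s)
        "elapsed_seconds": elapsed,
    }

# Demo with a stub "model" that returns a fixed string:
metrics = benchmark_generation(lambda p: "hello world from a local model", "hi")
```

Longitudinal analysis then reduces to persisting these per-request records and plotting them over time, per model and per platform.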