Energy-efficiency advisor for LLM inference that uses empirical measurements across GPUs and quantization levels to recommend batch size and precision settings and reduce energy waste.
by clawhubcommunity · Source: clawhub
Quality: medium · Safety: community · Category: AI & ML · Updated: 2026-02-16
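The core idea described above, picking the lowest-energy (precision, batch size) configuration from empirical per-GPU measurements, can be sketched as follows. This is a minimal illustration: the table values, GPU names, and the `best_config` helper are all hypothetical assumptions, not data or an API from the actual skill.

```python
# Hypothetical sketch: choose the most energy-efficient inference
# configuration from empirical joules-per-token measurements.
# All numbers and names are illustrative assumptions.

EMPIRICAL_J_PER_TOKEN = {
    # (gpu, precision, batch_size) -> joules per generated token (assumed)
    ("A100", "fp16", 1): 0.95,
    ("A100", "fp16", 8): 0.31,
    ("A100", "int8", 8): 0.22,
    ("A100", "int8", 32): 0.18,
    ("L4", "fp16", 8): 0.40,
    ("L4", "int8", 8): 0.27,
}

def best_config(gpu: str, max_batch: int):
    """Return the (precision, batch, joules/token) with the lowest
    measured energy per token for this GPU, under a batch-size cap."""
    candidates = {
        (prec, batch): joules
        for (g, prec, batch), joules in EMPIRICAL_J_PER_TOKEN.items()
        if g == gpu and batch <= max_batch
    }
    if not candidates:
        raise ValueError(f"no measurements for {gpu}")
    (prec, batch), joules = min(candidates.items(), key=lambda kv: kv[1])
    return prec, batch, joules

print(best_config("A100", 32))
```

Energy per token, rather than raw power draw, is the natural objective here: larger batches raise instantaneous power but usually amortize it over more tokens, which is why the lookup favors the quantized, higher-batch entries.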