TRAM: Training Approximate Multiplier Structures for Low-Power AI Accelerators
Researchers have developed TRAM, a technique that jointly optimizes low-power approximate multiplier structures together with AI model training parameters, achieving up to 27% power reduction in vision transformers without significant accuracy loss. This approach differs from prior methods by co-designing the multiplier hardware during model training rather than designing the multipliers separately and fixing them beforehand.
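The article does not give TRAM's internals, but the core idea, letting the training process absorb the error of an approximate multiplier, can be illustrated with a small sketch. The code below is a hypothetical example, not TRAM itself: `approx_mul` models a truncation-style approximate multiplier that drops the low-order result bits (standing in for removed partial-product columns), and `fit_compensation` learns a scalar correction in closed form so the model "trains around" the hardware error. All function names and the choice of truncation are assumptions for illustration.

```python
import random

def approx_mul(a: int, b: int, k: int = 4) -> int:
    """Hypothetical approximate unsigned multiplier: discard the k
    low-order result bits, modeling truncated partial-product columns
    that would be removed from the hardware to save power."""
    return ((a * b) >> k) << k

def fit_compensation(pairs, k: int = 4) -> float:
    """Least-squares scale c minimizing sum((a*b - c*approx_mul(a,b))^2),
    a toy stand-in for training model parameters around the multiplier's
    systematic error instead of designing the multiplier in isolation."""
    num = sum((a * b) * approx_mul(a, b, k) for a, b in pairs)
    den = sum(approx_mul(a, b, k) ** 2 for a, b in pairs)
    return num / den

random.seed(0)
samples = [(random.randrange(1, 256), random.randrange(1, 256))
           for _ in range(1000)]
c = fit_compensation(samples)
# Truncation only ever under-estimates the product, so the learned
# compensation scale comes out slightly above 1.
```

In a real co-design flow the "compensation" would live in the model weights and be learned by gradient descent with the approximate multiplier in the forward pass; the closed-form scalar here just makes the error-absorption idea concrete in a few lines.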