MIT researchers release CompreSSM to compress AI models during training

This month, researchers at MIT CSAIL released CompreSSM, a new method that compresses artificial intelligence models during the training process.
For UK businesses, this could weaken the link between high AI performance and heavy cloud computing bills, making it more viable to run powerful, custom models locally.
The technique trained popular architectures up to four times faster while shrinking models to a fraction of their size without losing accuracy.
MIT cuts AI training compute by 4x
On 9 April, MIT announced a new technique called CompreSSM that forces AI models to shed unnecessary weight during the learning process itself.
Traditionally, developers faced a hard choice: spend heavily to train a massive model and prune it afterwards, or train a cheap, small model that performs poorly. CompreSSM sidesteps this trade-off. Drawing on control theory, the system evaluates during training which parts of a model actually contribute to its output. Crucially, it identifies
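The general idea of shedding weight during training, rather than pruning afterwards, can be illustrated with a toy example. The article does not describe CompreSSM's control-theoretic criterion in detail, so the sketch below stands in a simple magnitude score as the measure of contribution, applied to a small linear model; every name and parameter here is illustrative, not from MIT's release.

```python
import numpy as np

def train_with_pruning(X, y, steps=200, lr=0.1, prune_every=50, prune_frac=0.2):
    """Gradient-descent linear regression that sheds low-contribution
    weights while it trains, instead of pruning after the fact.
    Illustrative stand-in only: CompreSSM's actual control-theoretic
    scoring is not reproduced here."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    mask = np.ones_like(w)                    # 1.0 = weight kept, 0.0 = pruned
    for t in range(1, steps + 1):
        pred = X @ (w * mask)
        grad = X.T @ (pred - y) / len(y)      # mean-squared-error gradient
        w -= lr * grad * mask                 # pruned weights stay frozen
        if t % prune_every == 0:
            score = np.abs(w)                 # stand-in contribution score
            alive = np.flatnonzero(mask)
            k = int(len(alive) * prune_frac)  # prune a fraction of survivors
            if k:
                drop = alive[np.argsort(score[alive])[:k]]
                mask[drop] = 0.0              # shed low-contribution weights
    return w * mask, mask
```

On data where only a couple of input features matter, the returned mask ends up sparse while the informative weights survive and converge, which is the essence of the "compress while you learn" approach: the compute saved grows as pruned weights drop out of later training steps.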