Hello and welcome, I’m your host. Today we’re exploring five impressive feats from DeepMind’s new “self-evolving AI” coding agent, AlphaEvolve. Powered by the very models it improves, AlphaEvolve writes code, tests it, and keeps the top performers to “evolve” better solutions over many cycles.
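To make that evolve-and-evaluate loop concrete, here is a minimal Python sketch of the idea. The evaluate and mutate functions are stand-ins I've invented (in AlphaEvolve the rewrite step is performed by Gemini models and the evaluation is an automated test of the generated program), so treat this as an illustration of the loop, not DeepMind's implementation.

```python
import random

# Toy sketch of an evolve-and-evaluate loop (illustration only, not DeepMind's code).
# A real system like AlphaEvolve asks an LLM to rewrite candidate programs; here a
# random numeric tweak stands in for that step so the loop is runnable end to end.

def evaluate(candidate: float) -> float:
    """Score a candidate 'program'. Higher is better.
    Stand-in objective: how close the candidate gets to an unknown optimum."""
    target = 3.14159
    return -abs(candidate - target)

def mutate(candidate: float) -> float:
    """Placeholder for the LLM-driven rewrite step."""
    return candidate + random.gauss(0.0, 0.5)

def evolve(generations: int = 200, population_size: int = 20) -> float:
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # Score every candidate and keep the top performers...
        population.sort(key=evaluate, reverse=True)
        survivors = population[: population_size // 4]
        # ...then refill the population with mutated copies of the survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = evolve()
    print(f"best candidate: {best:.5f}, score: {evaluate(best):.5f}")
```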
First, it tackled over 50 open math problems and improved on the best-known solutions in roughly 20% of cases. In one highlight, it found a new lower bound for the 300-year-old kissing number problem in 11 dimensions: a configuration of 593 spheres, surpassing the previous best-known arrangement.
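For context on what such a lower bound means: a kissing-number lower bound of N in dimension d is certified by exhibiting N unit vectors whose pairwise angles are all at least 60 degrees (equivalently, pairwise dot products at most 1/2). The short Python check below illustrates the certificate in 2D; the verifier and the hexagon example are my own illustration, not DeepMind's 11-dimensional construction.

```python
import numpy as np

# Sketch of how a kissing-number lower bound is certified: exhibit N unit vectors
# with all pairwise dot products <= 1/2 (pairwise angles >= 60 degrees). Each vector
# marks where one outer sphere touches the central sphere, so N valid vectors prove
# the kissing number in that dimension is at least N.

def is_valid_kissing_configuration(points: np.ndarray, tol: float = 1e-9) -> bool:
    # Every point must lie on the unit sphere.
    norms = np.linalg.norm(points, axis=1)
    if not np.allclose(norms, 1.0, atol=tol):
        return False
    # Every distinct pair must satisfy dot(u, v) <= 1/2.
    gram = points @ points.T
    np.fill_diagonal(gram, 0.0)
    return bool(np.all(gram <= 0.5 + tol))

# Known example: 6 circles kiss a central circle in 2D (hexagonal arrangement).
angles = np.deg2rad(np.arange(0, 360, 60))
hexagon = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(is_valid_kissing_configuration(hexagon))  # True -> kissing number in 2D >= 6
```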
Second, AlphaEvolve made Google's data centers roughly 0.7% more efficient by improving how compute jobs are scheduled. Third, it sped up a key matrix-multiplication kernel used in Gemini's training by 23%, cutting overall training time by about 1%. Fourth, it contributed to the design of an upcoming Google TPU, proposing a Verilog rewrite that makes a key arithmetic circuit more efficient. And fifth, it beat Strassen's 1969 algorithm for multiplying 4×4 matrices, using fewer scalar multiplications.
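To see what "fewer scalar multiplications" buys, here is Strassen's classic 2×2 scheme in Python: it uses 7 multiplications instead of the naive 8, and applied recursively to 4×4 blocks it needs 49. AlphaEvolve's reported method brings 4×4 complex-valued matrices down to 48. The snippet below is the textbook algorithm for context, not AlphaEvolve's new one.

```python
# Strassen's 1969 trick at the 2x2 level: 7 multiplications instead of 8.
# Recursing on 4x4 blocks gives 49 multiplications; AlphaEvolve's reported
# method reaches 48 for 4x4 complex matrices. (Illustration only.)

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4,           m1 - m2 + m3 + m6],
    ]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]], matching the ordinary product
```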
According to DeepMind, these are just the tip of the iceberg. From discovering new materials and drugs to streamlining operations, AlphaEvolve promises to redefine what AI can achieve.