Announcing our Seed Round

Wafer has raised $4 million in seed funding led by Fifty Years to build AI that optimizes AI infrastructure.

April 14, 2026 · Emilio Andere

We're thrilled to announce Wafer has raised $4 million in seed funding led by Fifty Years with participation from Liquid2 and Y Combinator.

We're also excited to bring on an incredible group of angel investors, including Jeff Dean (Chief Scientist, Google), Wojciech Zaremba (Cofounder, OpenAI), Arash Ferdowsi (Cofounder, Dropbox), Dan Fu (Head of Kernels, Together), Kawal Gandhi (Office of the CTO, Google), Alfredo Andere (Cofounder & CEO, Latch Bio), Mokshith Voodarla (Cofounder & CEO, Sieve), Max Buckley (Head of Knowledge Research, Exa), Tarun Chitra (Founder & CEO, Gauntlet), and others.

With this investment, we will accelerate the development of our AI performance engineering agent. This will allow hardware providers, cloud providers, and frontier labs to push the limits of their hardware, closing the massive gap between the performance of AI systems today and what's physically possible.


Why We Started Wafer

AI has crossed the human baseline for intellectual work in domain after domain. The world's hardest problems are information problems, and for the first time, human intelligence is no longer the limit. The only remaining limit is the cost of intelligence per unit of energy. We call this intelligence per watt. Every order-of-magnitude increase in intelligence per watt expands the set of solvable problems.

There's an enormous gap between how fast AI systems run today and what the hardware is actually capable of. The engineers who close that gap—performance engineers—are the bottleneck. There are maybe a few hundred people in the world who can deeply optimize accelerator hardware, and every frontier AI lab, chip vendor, and cloud provider fights over them. The problem is getting worse: hardware refresh cycles are accelerating and the ecosystem is diversifying. The work performance engineers do is painstaking and deep: profiling, reading traces, running experiments, mapping instructions back to source code. It's slow, manual, and doesn't scale.

Compilers will not come to save us this time. New hardware architectures ship faster than compilers and kernel libraries can adapt. Hyperscalers, labs, and alternative hardware startups are developing their own chips, and each new chip needs its software stack rewritten and re-optimized to unlock its full performance. The optimization space is too large, too hardware-specific, and changes too fast for compilers to close the gap.

We started Wafer because we believe the only compounding way to close this gap is to build AI that optimizes AI infrastructure. We're starting by building an agent that can act as a performance engineer across hardware architectures.

Steven and I have been friends since our freshman year at UChicago. We were roommates our senior year, when we came up with the idea for Wafer. Steven had worked on infrastructure for Bard at Google and done performance work at Two Sigma. I trained weather models at Argonne and published ML security research at NeurIPS.

Our mission is simple: maximize intelligence per watt. Make AI orders of magnitude more efficient per unit of energy spent producing it. We see this as the most essential piece of technology for the future of radical abundance we want to create.


We are just getting started. If you're looking to do your life's work on the most important AI infrastructure problems of the next decade, reach out. And if you're already working on AI infrastructure, GPU performance, or kernel engineering, we'd love to hear from you too: emilio@wafer.ai

— Emilio, Steven, John, Ian, Danny