🥛 The most important AI IPO? 👀
AI Summary
Cerebras Systems is listing on Nasdaq under ticker CBRS on May 14, 2026, with its IPO 20x oversubscribed, driven by its Wafer-Scale Engine chip, which is 56x larger than Nvidia's H100 and delivers 7,000x its memory bandwidth. The newsletter argues the AI industry is shifting from training to inference, a market projected to grow from $106B to $255B by 2030, positioning Cerebras favorably. However, significant risks include 86% revenue concentration in UAE-tied entities, a conditional $20B OpenAI contract, and a complex governance structure involving Sam Altman.
Author Takes
Cerebras Systems IPO valuation
Despite the engineering marvel and oversubscription frenzy, Cerebras trades at 91x trailing revenue, 86% of revenue is from UAE-tied entities, and the $20B OpenAI contract is conditional—making the valuation extremely risky.
AI industry shift from training to inference
The phase of building the biggest AI brain is largely done; the inference market growing from $106B to $255B by 2030 is the whole game, and Cerebras arrived at the IPO window right as this shift accelerates.
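The growth claim above can be sanity-checked with a quick compound-growth calculation. This is a minimal sketch, not from the newsletter itself: the source gives only the $106B and $255B endpoints, so the base year (assumed here to be 2025, i.e. a five-year span to 2030) is an assumption for illustration.

```python
# Implied compound annual growth rate (CAGR) of the inference market
# figures cited in the newsletter: $106B growing to $255B by 2030.
start_value = 106e9   # $106B (from the newsletter)
end_value = 255e9     # $255B projected by 2030 (from the newsletter)
years = 5             # ASSUMPTION: 2025 base year, so 5 years to 2030

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 19% per year under these assumptions
```

A shorter or longer base period would shift the implied rate, so the ~19% figure holds only under the assumed 2025 starting point.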
Micron (MU) as AI infrastructure trade
The author took a massive position in MU about a month ago and it has already gone up over 100%, as memory is becoming one of the real bottlenecks of the AI economy.
Sam Altman's governance role at Cerebras
Sam Altman being simultaneously a personal investor in Cerebras, CEO of its largest customer, and connected as a creditor is exactly the kind of relationship structure that makes institutional investors and compliance teams nervous.
Rising HBM memory prices benefiting Cerebras pitch
As MU and SK Hynix go higher, NVIDIA-based infrastructure gets more expensive to build and operate, making the Cerebras pitch—a chip that sidesteps HBM costs entirely—easier to make to every CFO.
Contrarian Angle
Wafer-Scale Chip: Don't Cut the Silicon
Cerebras Systems refused to follow 60 years of semiconductor convention by not dicing silicon wafers into individual chips, instead building one massive 46,225 mm² chip that eliminates the memory-bandwidth bottleneck critical to AI inference.
Every major chip company—Nvidia, AMD, Intel—dices wafers; Cerebras bets that keeping the entire wafer intact creates a fundamentally superior memory architecture for inference workloads that no conventional chipmaker can match.
More from Milk Road AI
🥛 The trade I’m making right now 👀
The author explains their decision to rotate out of Robinhood stock into Oracle, citing Robinhood's over-reliance on volatile crypto transaction revenue…
🥛 The scariest Amazon launch yet 😳
Amazon launched Amazon Supply Chain Services (ASCS), opening its 13B-item-per-year logistics network to any business, mirroring its AWS strategy of…
🥛 The real AI winner 🧠
Amazon's AI revenue reached $15B annual run rate, the fastest commercial adoption in Amazon's history, while the company plans $200B in capex for 2026