Great piece Alex! I hadn’t heard about Ilya Sutskever’s comments before, but the way you’ve laid out the implications for the AI world is really thought-provoking.
An interesting article came out yesterday in the Financial Times (https://www.ft.com/content/e91cb018-873c-4388-84c0-46e9f82146b4), where OpenAI’s CFO highlighted that “in 74 days (since June), we put ten billion of liquidity on the balance sheet. […] We’re in a massive growth phase, it behoves us to keep investing. We need to be on the frontier on the model front. That is expensive.”
Considering your analysis, Alex, I can’t help but ask myself: why would it still be that expensive? The classic trifecta of chips, data, and energy comes to mind.
You mentioned that energy efficiency could become a major focus, which aligns with these changing priorities. As for data, it might indeed take a backseat in pure training efforts, with quality taking precedence over quantity.
And chips? That’s where it might get intriguing (and indeed expensive). OpenAI is reportedly working with Broadcom and TSMC on its first in-house chip and investing heavily in building clusters of data centers across the US (https://www.reuters.com/technology/artificial-intelligence/openai-builds-first-chip-with-broadcom-tsmc-scales-back-foundry-ambition-2024-10-29/).
This suggests that while the liquidity these companies are amassing may not be necessary for funding superior AI performance, it could be used to build infrastructure that cements their positions as first movers. If that’s the case, smaller AI players looking to “create tailored solutions for specific workflows,” as you put it, could face significant challenges competing with the scale and resources of these industry giants. I completely agree with you that “achieving best-in-class performance may no longer require billions of dollars to scale these models,” but staying competitive as the rules shift away from data-focused strategies might still demand that kind of capital.
Let’s see how things unfold, but I really share your hope that we’ll start seeing more smaller AI players emerge!
Thanks for the interesting perspective - you raise some great points. Let's definitely chat more about this next time we catch up!