The Geopolitical Intelligence Group Weighs in on DeepSeek
As you can imagine, Academy’s Geopolitical Intelligence Group has been actively discussing DeepSeek over the past few days. Their input helped shape the T-Reports from Sunday (Amps to 11) and Monday (Gray Rhino).
Today, we have compiled some of the commentary from our Geopolitical Intelligence Group. It highlights how each member brings their own experience to their interpretation of the situation. While the overall theme is consistent, there are nuances across the views that are also important to consider.
We also got to discuss this, among other issues, on Bloomberg TV this morning. From a markets perspective, I don’t like the fact that NVDL and SOXL (two leveraged ETFs) had large inflows yesterday, signaling a willingness to double down in uncertain times rather than real fear (I’d prefer to see some evidence of capitulation). One question that we haven’t yet gotten an answer to is whether the “new” technology (if it exists) uses more or less energy. The sell-off in utilities may have been wrong regardless of what we learn about DeepSeek and whether it can be replicated domestically.
“I think that it’s important to note that DeepSeek (as a Chinese ‘Sputnik’ moment) is not the end of any path…it is just another point of interest on an existing path. ‘Tomorrow’ will see another compute threshold. Because they lacked core compute (chip restrictions), Chinese engineers were forced to ruthlessly optimize AI model efficiency to gain performance similar to the big AI models that American companies were building through brute strength (large-scale compute). The lessons in optimization will likely and swiftly find their way into the hands of all the hyper-scale AI model developers, who will now seek efficiencies in their own models. This moves the industry from the path of ‘strength through compute’ to one that also embraces ‘strength through efficiency.’ In either case, the result is better systems. The Chinese model does not matter as much as the new perspective on optimization that will be adopted by ALL the big model builders.
There are a number of industrial-age comparisons where someone invented a new way to execute a process: the industry grabbed it, customized it, and improved it. It does not mean that the cycle stops here. Every model builder will start from this new baseline to create something better. The compute ‘demand’ for all the things that AI will do has continued to grow exponentially. This evolution does not change that. It now provides capabilities that can better ‘chase’ the bigger data commodity that a transforming industry/society will demand.
The ‘shock’ reflected in the marketplace is legitimate. This is a short-term surprise. But I don’t think it invalidates the large-scale AI models, as the systemic growth in the data ecosystem remains well out of reach of our compute capacity. It does not represent a ‘fixed’ amount of compute that will now be cheaper…it represents progress toward an ‘infinite’ amount of compute demand that we may never fully satisfy. ‘Price per bit’ may decrease, but the demand signal for even more bits will continue to grow exponentially. One thing is certain…the U.S. should not feel confident that denying Chinese (and other) builders a resource will stop their innovation. Necessity remains the mother of invention.” – General Michael Groen
“The swirl of attention around DeepSeek over the weekend is causing other U.S. AI and chip technology companies to take notice. Most are scrambling to assess whether DeepSeek’s claims about capabilities and performance hold up. If true, they could shake up the marketplace, but a lot of ‘ifs’ still need to be assessed. China is accelerating its capabilities in the AI space, but until more people and teams can test for themselves, it is hard to project the way forward. The rush to try out DeepSeek ran into a snag on January 27th, when a purported large-scale cyberattack forced DeepSeek to limit new registrations. Maybe it is normal growing pains, but we need U.S.-based organizations to validate and benchmark the claims against other models.” – Nancy Morgan
“I think Michael and Nancy’s comments are spot on. The industry will absolutely need to ‘trust but verify’ the gains DeepSeek professes to have attained with its low-cost, high-performing, and low-complexity model design. We are in competition, not open conflict, with China. The industry absolutely understands this, and the dip in the shares of giants like NVIDIA and others bears this out. Importantly, regardless of the efficacy of DeepSeek’s internal logic, the development of a lower-cost (and possibly higher-performing) language model will stimulate competition and provide a foundation for adaptation that takes AI to the next level. This foundational model (like all others before it) will be built upon by everyone and enhanced by the rigorous data integrity standards that are a mainstay of Western industry leaders but are often lacking in China due to its centralized, authoritarian culture. We can’t discuss AI without mentioning the growing need for data centers and power generation for high-performance computing (HPC). This demand for more power from the energy markets (green or otherwise) will continue to accelerate worldwide.” – General John Evans