DeepSeek-R1, an AI model developed by DeepSeek, a relatively unknown Chinese company, has taken the AI industry by storm. Built at a fraction of the cost of OpenAI's models, DeepSeek-R1 has sparked intense discussion over the past few days. Many initially framed the breakthrough as another chapter in the ongoing U.S.-China AI rivalry, but the story is more complex than geopolitics. There are geopolitical implications, to be sure, yet the most significant takeaway is the growing influence of open-source AI and the challenge it poses to the dominance of proprietary, closed-source models. Yann LeCun, Meta's chief AI scientist and a longtime advocate of open-source AI, offers exactly this perspective.
LeCun recently took to LinkedIn to clarify the narrative, stating:
“To people who see the performance of DeepSeek and think: ‘China is surpassing the U.S. in AI.’ You are reading this wrong. The correct reading is: ‘Open-source models are surpassing proprietary ones.’”
This statement, while seemingly straightforward, carries profound implications. DeepSeek-R1 did not emerge in isolation—it stands on the shoulders of prior open-source research, drawing upon innovations like Meta’s Llama models and the PyTorch framework. Its success underscores a crucial shift: open-source AI is no longer playing catch-up; it is actively competing with, and in some cases surpassing, the closed AI models of major tech companies.
Open-Source vs. Closed AI: A Battle for the Future
The Open Source Initiative defines open-source AI as a system that grants users complete freedom—to utilize, study, modify, and distribute the model however they see fit. In simpler terms, open-source AI provides unrestricted access to its underlying architecture, akin to having an unlimited supply of raw materials for a chef to experiment with in the kitchen.
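In practical terms, that freedom means anyone can download an openly released model's weights and run, inspect, or fine-tune them on their own hardware. The snippet below is a minimal sketch of what that looks like, assuming the Hugging Face transformers library and one of DeepSeek's publicly released distilled R1 checkpoints (the exact model name is illustrative, not an endorsement of any particular variant):

```python
# Minimal sketch: loading an openly released checkpoint and generating text locally.
# Assumes the Hugging Face transformers library; the model id below is one of the
# distilled R1 variants DeepSeek has published under the deepseek-ai namespace.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative open-weights checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in one sentence why open-source AI matters."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing comparable is possible with a closed model, where access stops at a hosted API and the weights themselves never leave the provider's servers.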
By contrast, closed-source AI is the opposite. Proprietary models keep their source code locked away, preventing modifications or external development. Supporters of closed AI argue that this approach enhances security and prevents data misuse, ensuring user privacy.
However, Manu Sharma, cofounder and CEO of Labelbox, challenges this notion. He believes that keeping AI software closed is becoming increasingly difficult, stating:
“Almost every foundational piece of AI technology is open source and has gained significant traction. The same trend we saw with databases and operating systems is now playing out in AI—open solutions are gradually taking over.”
With proprietary AI requiring massive investment in data acquisition and computing power, many companies are now turning to open-source alternatives, which offer cost-effective and flexible solutions.
DeepSeek-R1’s training cost—reportedly just $6 million—has left many in the industry stunned, especially when compared to the billions spent by OpenAI, Google, and Anthropic on their cutting-edge models. Kevin Surace, CEO of Appvance, called this a “wake-up call,” pointing out that
“China has focused on low-cost, efficient AI models, while the U.S. has pursued massive, high-cost models.”
The AI Price War: Is It Inevitable?
DeepSeek-R1 raises a crucial question: Are we on the verge of an AI price war?
Even OpenAI’s CEO, Sam Altman, acknowledged DeepSeek’s breakthrough, stating in a tweet:
“DeepSeek’s R1 is an impressive model, particularly in terms of what they’ve achieved at such a low cost.”
Andy Thurai, VP and principal analyst at Constellation Research, also weighed in, predicting that DeepSeek’s efficiency will inevitably drive AI costs downward. In his newsletter, he noted:
“If open-source AI proves it can deliver high-performance models at a fraction of the cost, venture capitalists may start pulling back on investments in AI startups that rely solely on closed-source models.”
This shift could have severe consequences for AI firms whose valuations are tied to their ability to train proprietary models. Startups that fail to offer unique value beyond their LLMs may struggle to secure funding in the wake of DeepSeek’s disruption.
Privacy and Security Concerns
Despite the enthusiasm around open-source AI, it comes with its own set of challenges, particularly regarding security, misuse, and privacy.
Kevin Surace voiced concerns over DeepSeek’s Chinese origins, stating:
“Privacy is a major issue—it’s China. Data collection is always a concern. Users should be cautious.”
Although DeepSeek has made its model weights and code publicly available, its training data sources remain unclear. This lack of transparency raises questions about potential biases and security vulnerabilities.
Syed Hussain and Neil Benedict, cofounders of Shiza.ai, share these concerns. They argue that DeepSeek might not just be another AI company, but rather part of a larger strategy to challenge U.S. dominance in AI.
Benedict emphasized that while U.S. companies also collect vast amounts of user data, they are subject to legal constraints under U.S. privacy laws. In contrast, DeepSeek operates under a different regulatory framework, which raises concerns about its true motivations.
Hussain went further, warning that DeepSeek could be a “Trojan horse,” potentially acting as a sophisticated data collection mechanism disguised as an AI product.
Andy Thurai, however, believes transparency should be a priority for all AI models, regardless of their country of origin. He asserts:
“When selecting an AI model, transparency, model development processes, and auditability should matter more than just the cost of usage.”
For businesses considering adopting DeepSeek-R1, the lack of clarity around its dataset and safety measures remains a significant hurdle.
The Impact on Nvidia and the AI Supply Chain
Financial markets have already responded to DeepSeek’s emergence. Nvidia, a key supplier of AI chips, experienced temporary stock volatility amid concerns that cheaper AI models could reduce demand for its high-end GPUs. However, the stock has since rebounded by 6%.
Despite short-term uncertainties, Manu Sharma remains optimistic about Nvidia’s long-term position. He argues:
“The rise of affordable AGI will only accelerate AI adoption. The demand for inference computing—running AI models efficiently at scale—will soar.”
This suggests that while training costs may decrease, the demand for optimized AI hardware will continue to grow. Nvidia and other chip manufacturers may focus on enhancing inference capabilities rather than solely investing in large-scale model training.
The Future of Open-Source AI
DeepSeek-R1 has made one thing clear: high-performance open-source AI is here to stay, and it may well be the dominant force shaping the industry's future.
As LeCun pointed out:
“DeepSeek has built on open research and open-source tools like PyTorch and Llama. Because their work is open, everyone can benefit from it. That is the power of open research and open-source AI.”
Businesses must now reconsider their reliance on closed AI models and explore the benefits of open collaboration. The real debate moving forward won’t just be about the U.S. vs. China in AI dominance, but rather:
- Will AI development become open, accessible, and shared?
- Or will it remain closed, proprietary, and expensive?
One thing is certain: the genie is out of the bottle—and it’s open-source.