Chinese startup DeepSeek last week launched its open-source AI model DeepSeek R1, which it claims performs as well as or even better than industry-leading generative AI models at a fraction of the cost and with far less energy.
DeepSeek says its DeepSeek V3 model - on which R1 is based - was trained for two months at a cost of $5.6 million. By contrast, OpenAI is thought to have spent about $5 billion on development in the last year. DeepSeek also says its model uses 10 to 40 times less energy than similar US AI technology. In a paper on the model, the company said: “We introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks.”
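For readers wondering what “multi-stage training and cold-start data before RL” looks like in practice, the sketch below outlines the pipeline roughly as the paper describes it: a small supervised fine-tuning (SFT) pass on curated reasoning examples before reinforcement learning begins, followed by further SFT and RL rounds. Every function, name, and dataset here is an illustrative placeholder, not DeepSeek's actual code.

```python
# A minimal, illustrative sketch of the multi-stage pipeline the paper
# describes. All names below are placeholders for this sketch.

def train_sft(model, dataset):
    # Supervised fine-tuning stage (stub).
    print(f"SFT on {len(dataset)} examples")
    return model

def train_rl(model, reward_fn):
    # Reinforcement-learning stage (stub).
    print("RL pass")
    return model

def sample_reasoning_traces(model):
    # Generate and filter new training examples from the current checkpoint (stub).
    return ["filtered reasoning trace"]

base_model = "DeepSeek-V3-Base"  # placeholder for the pretrained base model
cold_start_data = ["curated long chain-of-thought example"]

# Stage 1: cold-start SFT, so RL does not begin from an unstable policy.
model = train_sft(base_model, cold_start_data)
# Stage 2: reasoning-focused RL.
model = train_rl(model, reward_fn=lambda output: 0.0)
# Stage 3: further SFT on data sampled from the RL checkpoint.
model = train_sft(model, sample_reasoning_traces(model))
# Stage 4: a final RL pass over a broader mix of prompts.
model = train_rl(model, reward_fn=lambda output: 0.0)
```

The point of the cold-start stage, per the paper, is to give the model a base of readable reasoning behaviour before reinforcement learning takes over.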
Much of the product and tech community reacted to the launch with both surprise and admiration.
Product coach Petra Wille said that what stood out for her was how DeepSeek has turned constraints into a catalyst for innovation. She said: “This is a powerful reminder that every business operates within a set of limitations - be they resource-related, regulatory, or market-driven. What differentiates successful teams and organisations is how they respond to those limitations. By embracing these challenges, DeepSeek was not only able to be innovative but also highly resource-efficient.”
She added that another striking aspect is the cultural shift toward open-source collaboration, even within competitive environments like AI, saying that the launch shows product leaders that collaboration and resource-sharing can be as valuable as proprietary innovation.
In a post on X, OpenAI CEO Sam Altman praised DeepSeek's model, saying that what it is able to deliver is impressive for the price. He said: “We will obviously deliver much better models and also it's legit invigorating to have a new competitor! We will pull up some releases.”
Others have been less enthusiastic. V Squared founder and AI advisor Vin Vashishta posted on LinkedIn that DeepSeek is “a novelty with 0 utility”. He wrote: “The data required to train the models is a true moat… No one uses Deep Seek to do work. It’s not reliable enough, and guardrails are expensive. Even if it’s used, the open-source community can’t gather that data, so the LLM project won’t advance.”
Financial markets reacted with nervousness, as did investors who have ploughed huge amounts of capital into AI startups in recent years. Major US tech stocks plummeted, but largely recovered in the days following the launch.
Pat Gelsinger, former CEO of Intel, posted on X that the financial markets had got it wrong. He said: “Computing obeys the gas law. Making it dramatically cheaper will expand the market for it. The markets are getting it wrong, this will make AI much more broadly deployed. Engineering is about constraints. The Chinese engineers had limited resources, and they had to find creative solutions.”
Wille also pointed out that the broader political context should not be ignored, saying that US export controls on advanced hardware and China's investment in AI are both part of a larger geopolitical chess match. She said: “DeepSeek's success, while inspiring, also reflects how innovation and competition are playing out on a massive, global scale. It's a reminder that technology doesn't operate in isolation - it's shaped by, and shapes, the world around it.”