Last week, we talked about the recent model releases from the Chinese AI company DeepSeek and why they were such a technical achievement. The DeepSeek team got enormous mileage out of a handful of machine-learning breakthroughs, using them to make their models quickly and inexpensively. They appear to be catching up fast.
I want to jump into a related question this week: Why is everyone talking about DeepSeek? It has been called America's "AI Sputnik moment." It displaced OpenAI's ChatGPT at the top of the iPhone app store. CEOs of major AI companies have posted about it on X. People who normally ignore AI are asking me, "Have you seen DeepSeek?"
I have, and don't get me wrong, it's a good model. But so are OpenAI's most advanced models, o1 and o3, and the current top performer on the Chatbot Arena leaderboard is actually Google's Gemini (DeepSeek's R1 is fourth).
All of which raises a question: Why do some AI advances break through to the general public while others don't?
OpenAI released a model (GPT-3.5) a few months before ChatGPT launched in late 2022. Anyone could access GPT-3.5 for free by going to OpenAI's sandbox, a website for experimenting with their latest LLMs.
GPT-3.5 was a big step forward for large language models. I was impressed as I explored what it could do, and so were many of the other people who follow AI progress closely. And yet almost no one else heard about it or discussed it.
When OpenAI launched ChatGPT, it reached 100 million users within just two months, a record. And ChatGPT was essentially the same model as the mostly unnoticed GPT-3.5. The difference: instead of a "sandbox" full of technical jargon and settings (what "temperature" do you want the AI to run at?), it was a straightforward chatbot with an interface people already knew, typing text into a box on a screen.
It wasn't the technology that drove ChatGPT's rapid adoption; it was the format it was presented in. And I think the same phenomenon is driving our current DeepSeek enthusiasm.
DeepSeek's R1 is not the best AI out there. The fact that it's an open model, unlike OpenAI's and Anthropic's, is a big deal for the open-source community, and its geopolitical significance, as clear evidence that China is keeping pace in AI development, is a big deal too. But neither of those explains the enthusiasm that put DeepSeek at the top of the app store, or why people feel they simply have to try it.
I think what's driving its widespread adoption is the visible reasoning it performs on the way to an answer: it's the first model to package chain-of-thought reasoning in a friendly chatbot user interface. People love watching DeepSeek think out loud. They say that watching it reason helps them learn to trust it and makes them more inclined to keep using it. (Ironically, it also makes the model's censorship more conspicuous: just don't ask it about Taiwan. But compared with the same ideological censorship applied more subtly, I think the visible kind comes out ahead.)
By contrast, when OpenAI released o1 (which performs similar reasoning before producing an answer), it decided not to make the "thinking process" public. This was probably for a few reasons: for one, it's a trade secret, and for another, models are much more likely to "slip" and break their safety rules mid-reasoning than in the final answer. (Indeed, there are plenty of videos of DeepSeek's R1 being critical of China before it notices the problem and backtracks.)
But the visible thinking process does something for the typical user similar to what the chat interface did: it makes the AI more approachable, more interactive, and less confusing. That's not a major difference in the underlying product, but it's a big difference in who is inclined to use the product.
"Watching the reasoning, seeing how earnestly it wrestles with what it knows and what it might not know, dramatically increases user trust," Garry Tan wrote.
AI has gotten better since you last checked in
Let me quickly respond to some of the most prominent DeepSeek misconceptions. No, it doesn't mean that all the money American companies are pouring in has been wasted. DeepSeek demonstrated that you can do more with fewer resources than anyone thought possible (taking its claims at face value), but you can still do even more with more resources.
DeepSeek may be an existential challenge for Meta, which has been trying to carve out the cheap open-source model niche, and it could threaten OpenAI's short-term business model. But AI's long-term business model has always been automating all the work done on computers, and DeepSeek gives us no reason to think that will be harder or less commercially valuable than previously believed.
The other thing driving the DeepSeek frenzy is simpler: most people aren't AI power users and haven't witnessed the two years of progress since ChatGPT first launched. But over those two years, AI has improved dramatically on nearly every measurable metric, especially in the frontier models that may be too expensive for the average user.
So if you're checking in for the first time after hearing there's a hot new AI everyone is talking about, and the last model you used was the free version of ChatGPT, you'll be stunned. DeepSeek's is a very good model, but most of the story is that all models have improved enormously over the past two years.
At the beginning of this year, I wrote that whether or not you like paying attention to AI, it is moving very fast and is poised to transform our world.
That's why it's a good thing every time a new viral AI app persuades people to take another look at the technology. To decide what policy approach we want to take to AI, we can't be reasoning from impressions of its strengths and limitations that are two years out of date, not with a technology that moves this quickly. So to the extent DeepSeek has made the world stop and notice how different 2025 is from 2023, that's great news.
I'm less sure it's great news to the extent that DeepSeek stirs up a generalized panic about China. The Chinese Communist Party is an authoritarian regime that systematically mistreats both its own citizens and the rest of the world. I don't want it to gain more geopolitical power, whether from a cruel AI arms race, from conquering Taiwan, or from the United States abandoning all of our global alliances. But the AI race is not like the nuclear arms race, because there was never any risk that a nuclear weapon would decide to take matters into its own hands.
Experts with otherwise very different emphases warn that if we do a bad job of designing the enormous numbers of AI agents that will soon be acting independently in the world, those agents may quite literally end up taking it over. (Are we on track to be that careless? Yes, absolutely, we're hard at work on it!)
Many people who are nervous about this situation are challenging pathological humor. I read the popular X post, “Please call me a nationalist.” “But I hope that the AI that turns me into paper clips is made in the United States,” but let's be serious here. China does not want to destroy the world. There are signs that US organizations recommend most of the safety measures taken in a US lab. These measures are currently completely inadequate, but if you adopt appropriate measures, you may copy them well.
We are in a real geopolitical competition, one with enormously high stakes. But we can't afford to lose sight of the common ground: neither country wants powerful AI systems that are willing to wrest control from us. That is a common ground no geopolitical rivalry erases.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!