This AI thinks it’s ChatGPT

By admin, December 30, 2024

China-based artificial intelligence laboratory DeepSeek made a remarkable announcement last week with its new artificial intelligence model, DeepSeek V3. The model outperforms leading models from OpenAI and Meta, delivering impressive performance on text-based tasks such as writing and coding. However, DeepSeek V3 has another interesting side: it thinks it’s ChatGPT.

When asked, DeepSeek V3 claims to be GPT-4, the model OpenAI released in 2023. When asked for information about DeepSeek’s API, it even shares OpenAI’s API usage instructions. It also tells the same jokes as ChatGPT. So what is behind this strangeness?

“Contamination” of training data

AI models are trained on millions of data samples to learn language patterns and make predictions. DeepSeek V3 is a large language model (LLM), a type of artificial intelligence model trained on huge data sets (text, books, videos, images, etc.) that companies generally do not disclose. The resulting models answer questions by statistically predicting the most likely continuation based on that data.
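
As a very rough illustration of that statistical idea, the toy sketch below predicts the next word purely from counts in its training text. This is a hypothetical bigram counter, not how DeepSeek V3 or any real LLM is built; the point is only that whatever phrases dominate the training data are what the model repeats back.

```python
# Toy sketch: "predict" the next word from bigram counts in training text.
# Real LLMs use neural networks over billions of tokens; this only illustrates
# that answers come from the statistics of the training data.

from collections import Counter, defaultdict

# Hypothetical training text in which ChatGPT-style phrases dominate.
training_text = "i am chatgpt . i am chatgpt . i am a language model ."

# Count which word follows which (bigram statistics).
follow_counts = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen after `word` in training."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("am"))  # -> "chatgpt": the model echoes its training data
```
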

Since DeepSeek does not disclose DeepSeek V3’s training data, it is difficult to determine why the model says it is ChatGPT. However, the behavior suggests that the model may have been trained on public text generated by GPT-4 via ChatGPT. If such data made it into the training set, DeepSeek V3 has likely memorized and is repeating some of ChatGPT’s output.
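
To make the contamination idea concrete, here is a minimal, hypothetical sketch of how one might scan a training corpus for tell-tale ChatGPT self-identification phrases. The file name, marker phrases, and JSONL format are assumptions for illustration, not DeepSeek’s or OpenAI’s actual tooling.

```python
# Minimal sketch: crude check for ChatGPT-style "contamination" in a corpus.
# Corpus path, phrase list, and format are illustrative assumptions.

import json
from collections import Counter

MARKER_PHRASES = [
    "as an ai language model",
    "i am chatgpt",
    "developed by openai",
]

def count_contamination(corpus_path: str) -> Counter:
    """Count marker-phrase hits across a JSONL corpus (one document per line)."""
    hits = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            text = json.loads(line).get("text", "").lower()
            for phrase in MARKER_PHRASES:
                if phrase in text:
                    hits[phrase] += 1
    return hits

if __name__ == "__main__":
    # "corpus.jsonl" is a placeholder file name.
    for phrase, count in count_contamination("corpus.jsonl").items():
        print(f"{count:>8}  {phrase}")
```
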

Training one artificial intelligence model on another model’s output is undesirable in terms of quality. It can lead to loss of information, like photocopying a photocopy. It may also violate OpenAI’s terms of service, and it can increase hallucinations, as in this case.

OpenAI CEO Sam Altman, although he did not target DeepSeek directly, made a post that appeared to reference such practices: “It’s (relatively) easy to copy something you know works. But it’s incredibly difficult to do something new, risky, and difficult when you don’t know if it will work.”

Another problem is that the internet is increasingly filled with AI-generated content. Clickbait articles and AI-powered bots are flooding data sets. According to one estimate, 90 percent of content on the internet will be AI-generated by 2026, which will make it harder to find “quality” data for training new models.
