Deepseek’s R1 Artificial Intelligence Model could not pass the security test

Deepseek’s R1 Artificial Intelligence Model could not pass the security test





Deepseek's R1 Artificial Intelligence Model could not pass the security test




See in full size


Artificial intelligence A new concern has emerged in terms of the security of technologies. Researchers from the University of Cisco and Pennsylvania, Chinese -based artificial intelligence company DeepSeek’in new In R1 model important security gaps detected. Researchers announced that all of the security measures designed to prevent harmful content of the model failed.

Deepseek’s security deficits worry the researchers

Within the scope of the research, HarmBench Selected from the standard evaluation library called 50 DIFFERENT MEATURAL PURCHASES tested. The results were quite striking. Because Deepseek R1 has failed to block all of the harmful content tested. This situation, as the researchers say y100 attack success rate means.

Cisco’s Product, Artificial Intelligence Software and Platform Vice President DJ Sampathpointed out the seriousness of the situation, and said that these results reveal the balance between cost and security. According to Sampath, an effort to develop a more cost -effective model may have led to ignoring the necessary security measures.




Deepseek's R1 Artificial Intelligence Model could not pass the security test




See in full size


An independent analysis conducted by AI Security Company Adverse AI has reached similar results. CEO of the company Alex PolyakovFrom simple language tricks of the Deepseek model to the commands created by complex artificial intelligence. jailbreak He confirmed that he was vulnerable to his techniques.

One of the most important dimensions of security gaps is the model indirectly fast injection attacks to be weak against threats known as. Such attacks target the way artificial intelligence systems process the data received from the outsourced sources and cause the system to overcome safety checks.

Deepseek’s situation points to a growing problem in the artificial intelligence industry. Large technology companies such as OpenAI and Meta constantly strengthen the safety of their models, while new players enter the market with inconsistencies in security standards. Deepseek has not yet made any explanation for these findings.