Kaspersky introduces secure AI development guide at IGF 2024

By admin, December 19, 2024

Kaspersky's “Guide for the Safe Development and Implementation of Artificial Intelligence Systems” was presented on December 18 at the IGF workshop “Cybersecurity in AI: Balancing Innovation and Risks.” Representatives of the Russian cybersecurity firm hosted an expert panel on how innovation in artificial intelligence can be aligned with effective risk and cybersecurity management. The document was prepared in collaboration with leading academic experts to address the increasingly complex cybersecurity issues surrounding AI-enabled systems.

SECURITY-FOCUSED DESIGN PRINCIPLES

The document is a practical resource for developers, administrators, and AI DevOps teams, offering detailed recommendations for addressing technical vulnerabilities and operational risks. The guidance is especially important for organizations that rely on third-party AI models and cloud-based systems, where security vulnerabilities can lead to serious data breaches and reputational damage.

Built on security-by-design principles, the guide helps organizations align their AI applications with ESG goals and international compliance requirements. It addresses critical elements of the development, deployment, and operation of AI systems, including design, security best practices, and integration, without focusing on the development of the underlying models.

The guide emphasizes the following principles to increase the security of artificial intelligence systems:

Kaspersky emphasizes the importance of leadership support and dedicated employee training. Employees must understand the methods malicious actors use to exploit artificial intelligence services, and training programs must be updated regularly to keep pace with evolving threats.

The guidelines emphasize the need to proactively identify and mitigate risks through threat modeling, which exposes vulnerabilities in the early stages of AI development. Kaspersky recommends using established risk assessment methodologies (e.g., STRIDE, OWASP) to evaluate AI-specific threats such as model abuse, data poisoning, and system vulnerabilities, along the lines of the sketch below.
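
As a rough illustration of what such an assessment can look like in practice, the Python sketch below enumerates hypothetical threats against an AI pipeline and ranks them by a simple likelihood-times-impact score. The assets, scenarios, and ratings are invented for illustration; they are not taken from the Kaspersky guide.

```python
# Minimal sketch of STRIDE-style threat enumeration for an AI pipeline.
# All asset names, scenarios, and ratings are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str          # component of the AI system under review
    category: str       # one of the six STRIDE categories
    scenario: str       # how the threat could materialize
    likelihood: int     # 1 (rare) .. 5 (frequent)
    impact: int         # 1 (minor) .. 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    Threat("training data store", "Tampering",
           "data poisoning via a compromised ingestion job", 3, 5),
    Threat("inference API", "Information disclosure",
           "model inversion leaking training records", 2, 4),
    Threat("prompt interface", "Elevation of privilege",
           "prompt injection bypassing content filters", 4, 4),
]

# Rank threats so mitigation effort goes to the highest-risk items first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"[risk {t.risk:>2}] {t.asset}: {t.category} - {t.scenario}")
```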

Since AI systems often run in cloud environments, they require stringent protections such as encryption, network segmentation, and two-factor authentication. To guard against breaches, Kaspersky emphasizes zero-trust principles, secure communication channels, and regular patching of the infrastructure.
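
As a minimal sketch of the secure-communication point, the snippet below sends an inference request over TLS with a short-lived bearer token and refuses to fall back to plain HTTP. The endpoint URL, environment variable name, and payload shape are hypothetical, not part of the guide.

```python
# Minimal sketch: calling a cloud-hosted model endpoint with TLS verification
# and token auth. The URL and MODEL_API_TOKEN variable are placeholders.
import os
import requests

ENDPOINT = "https://models.internal.example.com/v1/predict"  # hypothetical

def call_model(payload: dict) -> dict:
    if not ENDPOINT.startswith("https://"):
        raise RuntimeError("refusing to send inference traffic over plain HTTP")
    token = os.environ["MODEL_API_TOKEN"]  # injected by a secrets manager
    resp = requests.post(
        ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,    # fail fast instead of hanging on a degraded service
        verify=True,   # enforce TLS certificate validation (the default)
    )
    resp.raise_for_status()
    return resp.json()
```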

Kaspersky highlights the risks posed by third-party AI components and models, including data leaks and the resale or misuse of harvested information. Privacy policies and security practices for third-party services, such as the use of security monitoring sensors and regular security audits, therefore need to be strictly enforced.
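
One common control in this area, consistent with the article though not spelled out in it, is verifying the integrity of third-party model artifacts before loading them. The sketch below checks a file against a pinned SHA-256 digest; the file path and digest value are placeholders.

```python
# Minimal sketch: verify a third-party model artifact against a pinned
# SHA-256 digest before use. Path and digest below are placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: Path) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != PINNED_SHA256:
        raise RuntimeError(
            f"{path}: digest {digest} does not match the pinned value; "
            "the artifact may have been tampered with upstream"
        )

# Raises unless the file exists and matches the pinned digest.
verify_artifact(Path("third_party/model.onnx"))
```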

Continuous validation of AI models is critical to ensuring reliability. Kaspersky promotes performance monitoring and vulnerability reporting processes to detect problems caused by input-data drift or adversarial attacks. Proper partitioning of data sets and scrutiny of the model's decision-making logic are among the vital measures for reducing risk.
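
A minimal illustration of the monitoring idea: compare live input statistics against the training-time baseline and raise an alert when they diverge. The synthetic data and alerting threshold below are illustrative assumptions, not values from the guide.

```python
# Minimal sketch of input-drift monitoring: flag large shifts between
# live feature statistics and the training baseline.
import numpy as np

def drift_score(baseline: np.ndarray, live: np.ndarray) -> float:
    """Shift of the live mean, in units of the baseline standard deviation."""
    std = baseline.std() or 1e-9
    return abs(live.mean() - baseline.mean()) / std

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time inputs
live = rng.normal(loc=0.6, scale=1.0, size=1_000)       # shifted production inputs

score = drift_score(baseline, live)
if score > 0.5:  # alerting threshold would be tuned per feature in practice
    print(f"drift alert: live inputs shifted by {score:.2f} sigma")
```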

The guidance emphasizes protecting AI components against machine-learning-specific attacks such as adversarial inputs, data poisoning, and prompt injection. Measures such as including adversarial examples in the training set (sketched below), deploying anomaly detection systems, and applying distillation techniques increase the models' resistance to manipulation.
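
As a hedged sketch of the first measure, adversarial training, the PyTorch snippet below crafts FGSM perturbations and mixes them into each training batch. The model, data, and epsilon are toy placeholders; the guide itself does not prescribe a specific algorithm.

```python
# Minimal sketch of adversarial training with FGSM perturbations.
# Model architecture, data, and eps are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.1) -> torch.Tensor:
    """Return inputs perturbed in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(128, 20)            # stand-in training batch
y = torch.randint(0, 2, (128,))

for _ in range(10):
    x_adv = fgsm(model, x, y)       # craft adversarial examples
    batch = torch.cat([x, x_adv])   # mix clean and adversarial inputs
    labels = torch.cat([y, y])
    opt.zero_grad()
    nn.functional.cross_entropy(model(batch), labels).backward()
    opt.step()
```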

Kaspersky emphasizes that AI libraries and frameworks need to be patched frequently to close newly discovered vulnerabilities. Participation in bug bounty programs and lifecycle management for cloud-based AI models can further increase system resilience.
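
One way to operationalize the patching advice, assuming a Python-based ML stack, is to scan pinned dependencies for known vulnerabilities in CI, for example with the open-source pip-audit tool. The requirements file path below is a placeholder for your own pinned dependency set.

```python
# Minimal sketch: fail a CI job when pinned dependencies carry known CVEs,
# using pip-audit. The requirements.txt path is a placeholder.
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:
    # pip-audit exits non-zero when vulnerable packages are found
    sys.exit("vulnerable AI/ML dependencies detected; patch before deploying")
```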

Adherence to global regulations (e.g., GDPR, the EU AI Act) and best practices, together with auditing AI systems for regulatory compliance, helps organizations meet ethical and data privacy requirements while promoting trust and transparency.

The guidelines underscore the importance of deploying AI systems responsibly to avoid significant cybersecurity risks, making the document a critical resource for businesses and governments alike.