NSA’s AISC releases guidance on securing AI systems
The guidance emphasises mitigating known vulnerabilities in AI systems and sets out methodologies and controls to protect against malicious activity.
The National Security Agency’s Artificial Intelligence Security Center (NSA AISC) has released new guidance to bolster cybersecurity as AI systems are integrated into everyday operations. The guidance, developed jointly with agencies including CISA and the FBI, focuses on safeguarding AI systems against potential threats.
The recently released Cybersecurity Information Sheet, ‘Deploying AI Systems Securely,’ outlines essential best practices for organisations deploying externally developed AI systems. The guidance emphasises three primary security objectives: confidentiality, integrity, and availability. Confidentiality ensures sensitive information remains protected; integrity maintains the accuracy and reliability of systems and data; and availability guarantees that authorised users can access them when needed.
The guidance stresses the importance of mitigating known vulnerabilities in AI systems so that security risks are addressed before they can be exploited. The agencies advocate implementing methodologies and controls that detect and respond to malicious activity targeting AI systems, their data, and related services.
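The sheet frames these controls at the level of practice rather than code. As one deliberately simple illustration of what detection can look like, the Python sketch below flags clients whose query volume against an AI endpoint spikes past a fixed per-window limit, one crude signal of scraping or model-extraction attempts; the window size and threshold are hypothetical values of ours, not figures from the guidance.

```python
import time
from collections import defaultdict, deque

# Hypothetical sliding-window monitor for an AI inference endpoint.
# A client exceeding the per-window query limit is flagged for review.
WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 300  # illustrative threshold, tune per service

_history: dict[str, deque] = defaultdict(deque)

def record_and_check(client_id: str, now: float | None = None) -> bool:
    """Record one query; return True if the client exceeds the window limit."""
    now = time.time() if now is None else now
    window = _history[client_id]
    window.append(now)
    # Discard timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_QUERIES_PER_WINDOW
```

A real deployment would feed flags like this into the organisation’s wider alerting pipeline rather than acting on a single threshold.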
The recommendations include ongoing compromise assessments, hardening of the IT deployment environment, and thorough validation of AI systems before they go live. Strict access controls and robust monitoring tools, such as user behaviour analytics, are advised to identify and mitigate insider threats and other malicious activity.
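As a concrete illustration of the pre-deployment validation step, the sketch below refuses to accept a model artifact unless its SHA-256 digest matches a known-good value recorded when the file was obtained from the supplier. The function, file name, and digest shown are placeholders, not part of the agencies’ guidance.

```python
import hashlib
from pathlib import Path

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Raise if the artifact's SHA-256 digest does not match the
    known-good value recorded at acquisition time."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Stream in 1 MiB chunks so large model files don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}")

# Placeholder usage; substitute the real artifact path and digest:
# verify_model_artifact(Path("model.onnx"), "e3b0c44298fc1c14...")
```

Signed artifacts (for example, Sigstore-style signatures) offer a stronger version of the same idea, binding the file to its publisher rather than just to a digest.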
Organisations deploying AI systems are urged to review and adopt the recommended practices to strengthen the security posture of their deployments. This proactive approach helps keep AI systems resilient against evolving cyber threats in a rapidly advancing landscape.