Non-profit technology and R&D company MITRE has introduced a new mechanism that enables organizations to share intelligence on real-world AI-related incidents.
Shaped in collaboration with over 15 companies, the new AI Incident Sharing initiative aims to increase community knowledge of threats and defenses involving AI-enabled systems.
Launched as part of MITRE’s ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework, the initiative allows trusted contributors to receive and share protected and anonymized data on incidents involving operational AI-enabled systems.
The initiative, MITRE says, will be a safe place for capturing and distributing sanitized, technically focused AI incident information, improving collective awareness of threats and enhancing the defense of AI-enabled systems.
The initiative builds on the existing incident sharing collaboration across the ATLAS community and expands the threat framework with new generative AI-focused attack techniques and case studies, as well as with new methods for mitigating attacks against AI-enabled systems.
Modeled after traditional intelligence sharing, the new initiative uses the Structured Threat Information Expression (STIX) standard as its data schema. Organizations can submit incident data through the public sharing site, after which they will be considered for membership in the trusted community of receivers.
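For readers unfamiliar with STIX, the sketch below shows roughly what an exchanged record could look like. This is a hypothetical illustration, not MITRE's actual submission format: it builds a minimal STIX 2.1 Incident object, whose required common properties (`type`, `spec_version`, `id`, `created`, `modified`) follow the STIX 2.1 specification, while the incident details themselves are invented.

```python
# Hypothetical sketch: a minimal STIX 2.1 Incident SDO of the kind an
# intelligence-sharing initiative built on STIX might exchange.
# The required common properties follow the STIX 2.1 spec; the incident
# name/description values are made up for illustration.
import json
import uuid
from datetime import datetime, timezone

def make_incident(name: str, description: str) -> dict:
    """Build a dict shaped like a STIX 2.1 Incident object."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "incident",
        "spec_version": "2.1",
        "id": f"incident--{uuid.uuid4()}",  # STIX IDs are type--UUID
        "created": now,
        "modified": now,
        "name": name,
        "description": description,
    }

# Hypothetical AI-related incident (not a real report):
incident = make_incident(
    "Prompt injection against customer-support chatbot",
    "Attacker induced an AI-enabled assistant to disclose internal data.",
)
print(json.dumps(incident, indent=2))
```

Keeping submissions in a standard schema like this is what lets receivers ingest, anonymize, and correlate reports from many organizations automatically.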
The 16 organizations collaborating as part of the Secure AI project include AttackIQ, BlueRock, Booz Allen Hamilton, Cato Networks, Citigroup, Cloud Security Alliance, CrowdStrike, FS-ISAC, Fujitsu, HCA Healthcare, HiddenLayer, Intel, JPMorgan Chase Bank, Microsoft, Standard Chartered, and Verizon Business.
To ensure the knowledge base contains data on the latest threats demonstrated against AI in the wild, MITRE worked with Microsoft on ATLAS updates focused on generative AI in November 2023. In March 2023, the two collaborated on the Arsenal plugin for emulating attacks on ML systems.
“As public and private organizations of all sizes and sectors continue to incorporate AI into their systems, the ability to manage potential incidents is essential. Standardized and rapid information sharing about incidents will allow the entire community to improve the collective defense of such systems and mitigate external harms,” MITRE Labs VP Douglas Robbins said.
Related: MITRE Adds Mitigations to EMB3D Threat Model
Related: Security Firm Shows How Threat Actors Could Abuse Google’s Gemini AI Assistant
Related: Cybersecurity Public-Private Partnership: Where Do We Go Next?
Related: Are Security Appliances Fit for Purpose in a Decentralized Workplace?