CISA, NCSC Offer a Roadmap, Not Rules, in New Secure AI Guidelines

On Sunday, the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK's National Cyber Security Centre (NCSC) released new Guidelines for Secure AI System Development.

The Guidelines, co-sealed by 23 domestic and international cybersecurity organizations, build on ongoing White House efforts to mitigate AI risk and on the secure-by-design philosophy. They outline how to build security into AI systems but stop short of imposing any rules or regulations on the industry, in contrast to the European Union's recent AI Act. AI companies now have a guidebook to follow, or disregard, at their discretion.

"The industry is finding a lot of innovative ways to adopt AI for good, but also in malicious ways," says Chris Hughes, chief security advisor at Endor Labs and cyber innovation fellow at CISA. "This is a recognition that AI is here to stay, and we've got to try to get ahead of it, to avoid bolting security on later versus building it in now."

New Guidelines for AI in US, UK

CISA and NCSC broke down their new guidelines into four primary sections.

The first section, on secure design, covers potential risks and threat modeling, as well as the potential trade-offs to consider in this initial design phase.

The second section, on secure development, covers the AI development lifecycle, including supply chain security, documentation, and asset and technical debt management.

Next, the guidelines advise organizations on how to deploy securely: avoiding compromise, implementing incident management, and so on.

The last section covers all things related to the operation and maintenance of AI-enabled technologies post-deployment, including monitoring, logging, updating, and information sharing.

"It's not looking to recreate the wheel," Hughes explains. Instead, "what jumped out to me is the continued dialogue CISA has been having around secure-by-design systems and software. It's continuing the trend, and putting the onus on software suppliers and vendors — something that was emphasized not just by CISA, but also the NCSC."

Regulation: A Lighter or Heavier Touch?

In June, the European Parliament overwhelmingly passed its version of the AI Act, defining new laws aimed at trust and accountability for the AI industry.

By contrast, CISA and NCSC have merely provided recommendations for AI developers and the companies that rely on them.

"This is just a guideline, just a recommendation. It uses the word 'should' I think 51 times," Hughes emphasizes.

For this reason, he admits, they're unlikely to have nearly as much impact as real regulation. "As we know, security does have a cost to it — it can slow things down sometimes, or introduce friction. And when you have incentives like speed to market, and revenue, and things like that on the line, people tend to not do what they're not required to do."

But whether that's a good or a bad thing is up for debate. "If you come at it from the perspective of security and privacy for consumers and citizens, there's an argument that regulation is better. It's forcing security, caution, governance, and safeguards for privacy and security. But at the same time, there's no denying that compliance and regulatory measures can be cumbersome and bureaucratic, and can kind of box out younger, disruptive companies, having an impact on innovation," Hughes adds. "I hope that some software suppliers will take this and use it as a competitive differentiator."
