What Using Security to Regulate AI Chips Could Look Like


Image: Quality control lab for semiconductors, with a chip under a magnifier shown on a computer screen. Source: Science Photo Library via Alamy Stock Photo

Researchers from OpenAI, Cambridge University, Harvard University, and University of Toronto offered "exploratory" ideas on how to regulate AI chips and hardware, and how security policies could prevent the abuse of advanced AI.

The recommendations provide ways to measure and audit the development and use of advanced AI systems and the chips that power them. Policy enforcement recommendations include limiting the performance of systems and implementing security features that can remotely disable rogue chips.

"Training highly capable AI systems currently requires accumulating and orchestrating thousands of AI chips," the researchers wrote. "[I]f these systems are potentially dangerous, then limiting this accumulated computing power could serve to limit the production of potentially dangerous AI systems."

Governments have largely focused on software for AI policy, and the paper is a companion piece covering the hardware side of the debate, says Nathan Brookwood, principal analyst of Insight 64.

However, the industry will not welcome any security features that affect the performance of AI, he warns. Making AI safe through hardware "is a noble aspiration, but I can't see any one of those making it. The genie is out of the lamp and good luck getting it back in," he says.

Throttling Connections Between Clusters

One of the researchers' proposals is a cap limiting the compute processing capacity available to AI models. The idea is to put security measures in place that can identify abuse of AI systems and, in response, cut off or limit the use of chips.

Specifically, they suggest a targeted approach of limiting the bandwidth between memory and chip clusters. The easier alternative — to cut off access to chips — wasn’t ideal as it would affect overall AI performance, the researchers wrote.

The paper did not suggest ways to implement such security guardrails or how abuse of AI systems could be detected.

"Determining the optimal bandwidth limit for external communication is an area that merits further research," the researchers wrote.

Large-scale AI systems demand tremendous network bandwidth, and AI systems such as Microsoft's Eagle and Nvidia's Eos are among the top 10 fastest supercomputers in the world. Ways to limit network performance do exist for devices supporting the P4 programming language, which can analyze network traffic and reconfigure routers and switches.
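The paper leaves the enforcement mechanism open, but the bandwidth cap it describes is conceptually similar to a classic token-bucket rate limiter. The sketch below is purely illustrative — the class name, the 1 MB/s limit, and the API are hypothetical, not drawn from the paper or from P4 — and shows only the accounting logic a throttle between chip clusters would need:

```python
import time

class BandwidthCap:
    """Illustrative token-bucket limiter for inter-cluster traffic.

    The cap value and interface are hypothetical; a real enforcement
    point would sit in the network fabric (e.g., a P4-programmable
    switch), not in Python.
    """
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec      # sustained cap
        self.tokens = bytes_per_sec    # current send budget
        self.last = time.monotonic()

    def try_send(self, nbytes):
        # Refill the budget in proportion to elapsed time, up to the cap.
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True                # transfer allowed
        return False                   # transfer throttled

cap = BandwidthCap(bytes_per_sec=1_000_000)  # hypothetical 1 MB/s cap
assert cap.try_send(500_000)       # within budget
assert cap.try_send(500_000)       # exhausts the bucket
assert not cap.try_send(500_000)   # blocked until tokens refill
```

Choosing the `bytes_per_sec` value is exactly the open question the researchers flag: set it too low and legitimate training slows; too high and the cap constrains nothing.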

But good luck asking chip makers to implement AI security mechanisms that could slow down chips and networks, Brookwood says.

"Arm, Intel, and AMD are all busy building the fastest, meanest chips they can build to be competitive. I don't know how you can slow down," he says.

Remote Possibilities Carry Some Risk

The researchers also suggested disabling chips remotely, which is something that Intel has built into its newest server chips. The On Demand feature is a subscription service that will allow Intel customers to turn on-chip features such as AI extensions on and off like heated seats in a Tesla.

The researchers also suggested an attestation scheme where chips allow only authorized parties to access AI systems via cryptographically signed digital certificates. Firmware could provide guidelines on authorized users and applications, which could be changed with updates.
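The attestation idea can be sketched in a few lines. The example below is a simplified stand-in, not the researchers' design: it uses a shared-secret HMAC where a real certificate scheme would use asymmetric signatures, and the issuer key, user IDs, and application names are all hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical issuer key; a real scheme would use an asymmetric
# private key held by the certifying authority, with the chip
# verifying against the corresponding public key.
ISSUER_KEY = b"issuer-secret"

def issue_certificate(user_id, allowed_apps):
    """Sign a claim listing which user and applications are authorized."""
    payload = json.dumps({"user": user_id, "apps": sorted(allowed_apps)}).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def chip_admits(cert, user_id, app):
    """What chip firmware would check before granting access."""
    expected = hmac.new(ISSUER_KEY, cert["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return False  # tampered or unsigned certificate
    claims = json.loads(cert["payload"])
    return claims["user"] == user_id and app in claims["apps"]

cert = issue_certificate("lab-42", ["training"])
assert chip_admits(cert, "lab-42", "training")        # authorized use
assert not chip_admits(cert, "lab-42", "inference")   # app not in cert
assert not chip_admits(cert, "lab-99", "training")    # wrong party
```

Updating the firmware's list of trusted issuer keys is how the authorization policy would be changed over time, matching the paper's suggestion that guidelines could be revised through updates.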

While the researchers did not provide technical recommendations on how this would be done, the idea is similar to how confidential computing secures applications on chips by attesting authorized users. Intel and AMD both offer confidential computing on their chips, but it is still early days for the emerging technology.

There are also risks to remotely enforcing policies. "Remote enforcement mechanisms come with significant downsides, and may only be warranted if the expected harm from AI is extremely high," the researchers wrote.

Brookwood agreed.

"Even if you could, there are going to be bad guys who are going to pursue it. Putting artificial constraints for good guys is going to be ineffective," he said.
