The European Union Artificial Intelligence (AI) Act: Managing Security and Compliance Risk at the Technological Frontier


Originally published by Scrut Automation.

Written by Amrita Agnihotri.

A growing wave of AI-related legislation and regulation is building, with the most significant example being the European Union’s (EU) Artificial Intelligence (AI) Act. In March 2024, the European Parliament approved this sweeping legislation.

It will clearly have huge impacts on the way business is done, both in the EU and globally. In this post, we’ll look at the implications for organizations deploying AI to drive business value.

Background

Initial drafts date back to 2018, and the European Commission formally proposed the AI Act in early 2021, after which it continued to evolve over the subsequent two and a half years. The explosion in the use of AI tools following the launch of ChatGPT in late 2022 added special urgency for EU rulemakers.

Four Categories

The Act takes a risk-based approach, sorting AI systems into four categories: unacceptable risk (banned outright), high risk, limited risk, and minimal risk. While each of the EU member states will need to develop its own regulatory infrastructure, the AI Act also creates a European AI Office within the European Commission. This office will help coordinate between the various national governments as well as supervise and regulate “general purpose” AI models, such as Large Language Models (LLMs) trained on a diverse array of information.

Although it isn’t clear from official communications, a leaked draft of the Act suggested it would not apply to open-source models. The open-source community has aggressively criticized many EU regulatory efforts, including the AI Act as well as the proposed Cyber Resilience Act (CRA).

Enforcement will be phased in. Six months after the Act enters into force, unacceptably risky AI systems will be legally banned; after 12 months, rules for general-purpose AI will apply; and at 24 months, the entire AI Act will be in force.
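Because these deadlines all count forward from a single start date, they are straightforward to track programmatically. Below is a quick date-arithmetic sketch; the entry-into-force date used is an assumption for illustration only, as the authoritative dates are set by publication in the EU's Official Journal.

```python
# Sketch: computing the phased AI Act milestones from an assumed
# entry-into-force date. The date below is illustrative; confirm the
# authoritative date in the EU's Official Journal.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption for illustration

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month stays at 1 here)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

milestones = {
    "Bans on unacceptable-risk AI systems": add_months(ENTRY_INTO_FORCE, 6),
    "Rules for general-purpose AI": add_months(ENTRY_INTO_FORCE, 12),
    "Full application of the Act": add_months(ENTRY_INTO_FORCE, 24),
}
for label, deadline in milestones.items():
    print(f"{label}: {deadline.isoformat()}")
```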

The fines for non-compliance are steep (a quick illustration of how they are calculated follows the list):

  • €35 million or 7% of global annual revenue, whichever is greater, for use of prohibited AI applications.
  • €7.5 million or 1.5% of global annual revenue, whichever is greater, for supplying incorrect information.
  • €15 million or 3% of global annual revenue, whichever is greater, for violations of other obligations.
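To make the “whichever is greater” rule concrete, here is a minimal calculation sketch. The function and figures below simply mirror the tiers listed above; they are illustrative only, not legal advice.

```python
# Sketch: maximum exposure under the AI Act's "whichever is greater" rule.
def max_fine(global_annual_revenue_eur: float,
             flat_cap_eur: float,
             revenue_pct: float) -> float:
    """Return the greater of the flat cap and the revenue-based cap."""
    return max(flat_cap_eur, global_annual_revenue_eur * revenue_pct)

# Example: a company with EUR 2 billion in global annual revenue deploying a
# prohibited AI application (EUR 35 million or 7%, whichever is greater).
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 -> the 7% cap
```

For large enterprises the percentage cap dominates, while for smaller firms the flat amount is the binding figure, which is precisely why the rule is written as a maximum of the two.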

The first category of potential fines is meant to strongly deter organizations from deploying certain types of AI applications, which we’ll dive into next.

Banned applications of AI

The EU Commission drew a clear line in the sand by completely outlawing certain types of AI applications and development. Banned applications include:

  • Systems that manipulate human behavior to circumvent free will. While the EU press release gives the example of toys that use voice assistance to encourage dangerous behavior in minors, it isn’t clear how this rule will apply to more ambiguous situations. Virtually every type of advertising attempts to steer human behavior, and it’s hard to see how advertising will avoid using AI in the future, so this rule will need clarification.
  • Systems that allow ‘social scoring’ by governments or companies. This provision is a clear allusion to fears that China is planning to build a system that integrates financial, social media, and criminal record monitoring to evaluate its entire population. The implementation details will be important here because many companies use things like net promoter or customer sentiment scores to track reputation and other business risks.
  • Use of emotion recognition systems in the workplace. This is another area that will need substantial elaboration. While it is understandable that the EU might want to prohibit certain types of oppressive monitoring of employees, where it draws the line will be important. A range of AI-powered communications tools already analyzes emotion to predict things like customer churn risk.
  • Certain applications of predictive policing. While this is not a blanket ban as some had hoped, it seems certain crime-prediction methods will be outlawed.
  • Real-time remote biometric identification for law enforcement purposes in public (with some exceptions for national security). This provision appears to ban police from deploying facial recognition or other sensor systems in a general way to identify criminals. The narrowness of the exceptions will be vital in determining what issues this provision poses.

High-risk systems and their required controls

Aside from banned systems, there is another category of permitted but high-risk use cases. These include:

  • Critical infrastructure applications, e.g., water, gas, and electricity
  • Educational institution admission
  • Biometric identification
  • Justice administration
  • Sentiment analysis
  • Medical devices
  • Border control

The AI Act will require that such systems comply with a range of risk-mitigation requirements, including those related to the following (a brief sketch of how one control might look in practice follows the list):

  • High-quality data sets
  • Logging and auditing
  • Human oversight
  • High accuracy
  • Cybersecurity
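The Act leaves the concrete form of these controls to forthcoming harmonized standards, but a minimal sketch can suggest how the logging-and-auditing and human-oversight items might surface in code. Every name below (audited_predict, the model interface, the record fields) is a hypothetical illustration, not a format the Act prescribes.

```python
# Sketch: an audit-logging wrapper around a model call. Hypothetical
# interface; the AI Act requires record-keeping for high-risk systems but
# does not prescribe a specific format.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_predict(model, features: dict, operator_id: str):
    """Run a prediction and write an audit record capturing the inputs,
    the output, and the human operator responsible for oversight."""
    prediction = model.predict(features)  # hypothetical model interface
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator_id": operator_id,  # ties each decision to a human overseer
        "input": features,           # enables later review of data quality
        "output": prediction,
    }, default=str))
    return prediction
```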

Conclusion

As we have seen from previous EU regulatory efforts, especially the General Data Protection Regulation (GDPR), the AI Act's impacts are likely to be felt far and wide. While it may take some time for regulators to catch up with the pace of technology, they inevitably do so.

Even five years after it came into effect, the GDPR is just building up momentum in terms of enforcement action, resulting in some shocking fines for major companies.

This type of “regulation through enforcement” is unfortunate but likely unavoidable as companies test the limits of new rules and governments react aggressively. The best approach is to follow a balanced course of action that allows for taking advantage of AI’s many benefits while avoiding or mitigating its most significant risks.
