Innovating in line with the European Union’s AI Act 

As our Microsoft AI Tour reached Brussels, Paris, and Berlin toward the end of last year, we met with European organizations that were energized by the possibilities of our latest AI technologies and engaged in deployment projects. They were also alert to the fact that 2025 is the year that key obligations under the European Union’s AI Act come into effect, opening a new chapter in digital regulation as the world’s first comprehensive AI law becomes a reality.

At Microsoft, we are ready to help our customers do two things at once: innovate with AI and comply with the EU AI Act. We are building our products and services to comply with our obligations under the EU AI Act and working with our customers to help them deploy and use the technology compliantly. We are also engaging with European policymakers to support the development of efficient and effective implementation practices under the EU AI Act that are aligned with emerging international norms.

Below, we go into more detail on these efforts. Because the dates for compliance with the EU AI Act are staggered and key implementation details are not yet finalized, we will be publishing information and tools on an ongoing basis. You can consult our EU AI Act documentation on the Microsoft Trust Center to stay up to date.

Building Microsoft products and services that comply with the EU AI Act 

Organizations around the world use Microsoft products and services for innovative AI solutions that empower them to achieve more. For these customers, particularly those operating globally and across different jurisdictions, regulatory compliance is of paramount importance. This is why, in every customer agreement, Microsoft has committed to comply with all laws and regulations applicable to Microsoft, including the EU AI Act. It is also why we made early decisions to build and continue to invest in our AI governance program.

As outlined in our inaugural Transparency Report, we have adopted a risk management approach that spans the entire AI development lifecycle. We use practices like impact assessments and red-teaming to help us identify potential risks and to ensure that teams building the highest-risk models and systems receive additional oversight and support through governance processes, like our Sensitive Uses program. After mapping risks, we use systematic measurement to assess the prevalence and severity of risks against defined metrics. We manage risks by implementing mitigations like the classifiers that form part of Azure AI Content Safety and by ensuring ongoing monitoring and incident response.
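
To give a concrete sense of how content classifiers like these can be applied, here is a minimal sketch that screens a piece of text with the azure-ai-contentsafety Python SDK; the endpoint, key, and severity threshold are placeholder assumptions, and field names can vary between SDK versions.

```python
# Minimal sketch: screening text with Azure AI Content Safety classifiers.
# The endpoint, key, and severity threshold are illustrative placeholders.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)


def passes_content_screen(text: str, max_severity: int = 2) -> bool:
    """Return True if every harm category stays at or below the chosen severity."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all(
        (category.severity or 0) <= max_severity
        for category in result.categories_analysis
    )


if __name__ == "__main__":
    print(passes_content_screen("Example model output to check before display."))
```

A check like this would typically sit alongside, not replace, the broader monitoring and incident response processes described above.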

Our framework for guiding engineering teams building Microsoft AI solutions, the Responsible AI Standard, was drafted with an early version of the EU AI Act in mind.

Building on these foundational components of our program, we have devoted significant resources to implementing the EU AI Act across Microsoft. Cross-functional working groups combining AI governance, engineering, legal, and public policy experts have been working for months to identify whether and how our internal standards and practices should be updated to reflect the final text of the EU AI Act as well as early indications of implementation details. They have also been identifying any additional engineering work needed to ensure readiness.

For example, the EU AI Act’s prohibited practices provisions are among the first provisions to come into effect, in February 2025. Ahead of the European Commission’s newly established AI Office providing additional guidance, we have taken a proactive, layered approach to compliance. This includes:

  • Conducting a thorough review of Microsoft-owned systems already on the market to identify any places where we might need to adjust our approach, including by updating documentation or implementing technical mitigations. To do this, we developed a series of questions designed to elicit whether an AI system could implicate a prohibited practice and sent this survey to our engineering teams via our central tooling. Relevant experts reviewed the responses and followed up with teams directly where further clarity or additional steps were necessary. These screening questions remain in our central responsible AI workflow tool on an ongoing basis, so that teams working on new AI systems answer them and engage the review workflow as needed. (A simplified, hypothetical sketch of what such a screening checklist could look like follows this list.)
  • Creating new restricted uses in our internal company policy to ensure Microsoft does not design or deploy AI systems for uses prohibited by the EU AI Act. We are also developing specific marketing and sales guidance to ensure that our general-purpose AI technologies are not marketed or sold for uses that could implicate the EU AI Act’s prohibited practices.
  • Updating our contracts, including our Generative AI Code of Conduct, so that our customers clearly understand they cannot engage in any prohibited practices. For example, the Generative AI Code of Conduct now has an explicit prohibition on the use of the services for social scoring.
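
As a simplified, hypothetical illustration of the screening approach described in the first item above, the sketch below represents a handful of prohibited-practice questions as a checklist and escalates a system for expert review when any question is answered with a yes. The question wording, categories, and escalation rule are assumptions made for illustration and do not reproduce Microsoft’s internal workflow tooling.

```python
# Hypothetical sketch of a prohibited-practices screening checklist.
# The questions and the escalation rule are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ScreeningQuestion:
    id: str
    text: str


# A few examples loosely based on the EU AI Act's prohibited practices.
SCREENING_QUESTIONS = [
    ScreeningQuestion(
        "social_scoring",
        "Does the system evaluate or classify people based on social behavior "
        "or personal characteristics in ways that could lead to detrimental treatment?",
    ),
    ScreeningQuestion(
        "emotion_inference",
        "Does the system infer the emotions of individuals in workplace or "
        "educational settings?",
    ),
    ScreeningQuestion(
        "untargeted_scraping",
        "Does the system create or expand facial recognition databases through "
        "untargeted scraping of images?",
    ),
]


def needs_expert_review(answers: dict[str, bool]) -> bool:
    """Escalate to expert review if any screening question is answered 'yes'."""
    return any(answers.get(question.id, False) for question in SCREENING_QUESTIONS)


if __name__ == "__main__":
    # One 'yes' answer is enough to trigger a follow-up review.
    print(needs_expert_review({"social_scoring": False, "emotion_inference": True}))
```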

We were also among the first organizations to sign up to the three core commitments in the AI Pact, a set of voluntary pledges developed by the AI Office to support regulatory readiness ahead of some of the upcoming compliance deadlines for the EU AI Act. In addition to our regular rhythm of publishing annual Responsible AI Transparency Reports, you can find an overview of our approach to the EU AI Act and a more detailed summary of how we are implementing the prohibited practices provisions on the Microsoft Trust Center.

Working with customers to help them deploy and use Microsoft products and services in compliance with the EU AI Act

One of the core concepts of the EU AI Act is that obligations need to be allocated across the AI supply chain. This means that an upstream regulated actor, like Microsoft in its capacity as a provider of AI tools, services, and components, must support downstream regulated actors, like our enterprise customers, when they integrate a Microsoft tool into a high-risk AI system. We embrace this concept of shared responsibility and aim to support our customers with their AI development and deployment activities by sharing our knowledge, providing documentation, and offering tooling. This all ladders up to the AI Customer Commitments that we made in June of last year to support our customers on their responsible AI journeys.

We will continue to publish documentation and resources related to the EU AI Act on the Microsoft Trust Center to provide updates and address customer questions. Our Responsible AI Resources site is also a rich source of tools, practices, templates, and information that we believe will help many of our customers establish the foundations of good governance to support EU AI Act compliance.

On the documentation front, the 33 Transparency Notes that we have published since 2019 provide essential information about the capabilities and limitations of our AI tools, components, and services that our customers rely on as downstream deployers of Microsoft AI platform services. We have also published documentation for our AI systems, such as answers to frequently asked questions. Our Transparency Note for the Azure OpenAI Service, an AI platform service, and our FAQ for Copilot, an AI system, are examples of our approach.

We expect that several of the secondary regulatory efforts under the EU AI Act will provide additional guidance on model- and system-level documentation. These norms for documentation and transparency are still maturing and would benefit from additional definition consistent with efforts like the Reporting Framework for the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems. Microsoft has been pleased to contribute to this Reporting Framework through a process convened by the OECD and looks forward to its forthcoming public release.

Finally, because tooling is essential to achieving consistent and efficient compliance, we make available to our customers versions of the tools that we use for our own internal purposes. These tools include Microsoft Purview Compliance Manager, which helps customers understand and take steps to improve compliance capabilities across many regulatory domains, including the EU AI Act; Azure AI Content Safety, which helps mitigate content-based harms; Azure AI Foundry, which supports evaluations of generative AI applications; and the Python Risk Identification Tool, or PyRIT, an open innovation framework that our independent AI Red Team uses to help identify potential harms associated with our highest-risk AI models and systems.
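
As an illustrative example of what such evaluation tooling can look like in code, here is a minimal sketch that scores a single response for relevance using the azure-ai-evaluation Python SDK, one programmatic route to the kind of evaluations Azure AI Foundry supports; the model configuration values are placeholders, and evaluator names and parameters may differ between SDK versions.

```python
# Minimal sketch: scoring one response for relevance with azure-ai-evaluation.
# The Azure OpenAI endpoint, key, and deployment below are placeholders, and
# evaluator names and parameters may differ between SDK versions.
import os

from azure.ai.evaluation import RelevanceEvaluator

# Configuration for the Azure OpenAI deployment that acts as the judge model.
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
}

relevance = RelevanceEvaluator(model_config)

# Score how well the response addresses the query; the result is a dict of metrics.
result = relevance(
    query="Which practices does the EU AI Act prohibit?",
    response="Among other things, the EU AI Act prohibits social scoring by AI systems.",
)
print(result)
```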

Helping to create efficient, effective, and interoperable implementation practices

A unique feature of the EU AI Act is that there are more than 60 secondary regulatory efforts that will have a material impact on defining implementation expectations and directing organizational compliance. Since many of these efforts are in progress or yet to get underway, we are in a key window of opportunity to help establish implementation practices that are efficient, effective, and aligned with emerging international norms.

Microsoft is engaging with the central EU regulator, the AI Office, and other relevant authorities in EU Member States to share insights from our AI development, governance, and compliance experience, seek clarity on open questions, and advocate for practical outcomes. We are also participating in the development of the Code of Practice for general-purpose AI model providers, and we remain longstanding contributors to the technical standards being developed by European standards organizations, such as CEN and CENELEC, to address high-risk AI system requirements in the EU AI Act.

Our customers also have a key role to play in these implementation efforts. By engaging with policymakers and industry groups to understand the evolving requirements and have a say in shaping them, our customers have the opportunity to contribute their valuable insights and help shape implementation practices that better reflect their circumstances and needs, recognizing the broad range of organizations in Europe that are energized by the opportunity to innovate and grow with AI. In the coming months, a key question to be resolved is when organizations that substantially fine-tune AI models become downstream providers obligated to comply with the general-purpose AI model obligations that take effect in August.

Going forward 

Microsoft will continue to make significant product, tooling, and governance investments to help our customers innovate with AI in line with new laws like the EU AI Act. Implementation practices that are efficient, effective, and internationally interoperable will be key to supporting useful and trustworthy innovation on a global scale, so we will continue to lean into regulatory processes in Europe and around the world. We are excited to see the projects that animated our Microsoft AI Tour events in Brussels, Paris, and Berlin improve people’s lives and earn their trust, and we welcome feedback on how we can continue to support our customers in their efforts to comply with new laws like the EU AI Act.
