AI regulation isn’t a sexy topic, but it’s an important one if you’re a company or organization looking to harness AI in your workflows. Some of you will be looking for the best off-the-shelf tools for the job, while others will develop large language models in-house. Either way, you’ll need to stay plugged in to what’s happening with AI regulations, and that includes what’s happening globally.
Today, we’ll be discussing the EU’s recent AI legislation as it moves forward in the European Parliament. It’s the first of its kind, and it provides some insight into where AI regulations could end up both stateside and in other countries. Here’s the full text. Keep reading for the bullet points.
What the EU AI Legislation Says
Here are the key takeaways from the “Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)” by the European Commission:
- The proposal aims to establish harmonized rules on artificial intelligence (AI) to ensure that AI technologies are safe, respect fundamental rights, and align with EU values. It seeks to balance the economic and societal benefits of AI with the potential risks and negative consequences.
- The proposal is a response to the political commitment by President von der Leyen, who announced that the Commission would put forward legislation for a coordinated European approach on the human and ethical implications of AI.
- The proposal identifies certain AI practices as harmful and proposes specific restrictions and safeguards, particularly in relation to the use of remote biometric identification systems for law enforcement purposes.
- The proposal defines “high-risk” AI systems that pose significant risks to the health and safety or fundamental rights of persons. These systems will have to comply with a set of mandatory requirements for trustworthy AI and follow conformity assessment procedures before they can be placed on the Union market.
- The proposal also sets out transparency obligations for certain AI systems, such as chatbots or ‘deep fakes’.
- The proposed rules will be enforced through a governance system at the Member State level, complemented at the Union level by the establishment of a European Artificial Intelligence Board. Additional measures are proposed to support innovation, including AI regulatory sandboxes and measures to reduce the regulatory burden on small and medium-sized enterprises (SMEs) and start-ups.
- The proposal is consistent with existing Union legislation applicable to sectors where high-risk AI systems are already used or likely to be used in the near future, including data protection, consumer protection, non-discrimination, and gender equality.
- The proposal is part of a wider package of measures that address problems posed by the development and use of AI, and is coherent with the Commission’s overall digital strategy. It also strengthens the Union’s role in shaping global norms and standards and promoting trustworthy AI.
What It Means for US Stakeholders
- Harmonized rules on AI: The proposal aims to establish a uniform set of rules for AI technologies. This means that U.S. companies developing AI technologies for the European market will need to adhere to these rules. Understanding these regulations will be crucial for market access and compliance.
- Coordinated European approach on AI’s ethical implications: The EU’s focus on the ethical implications of AI indicates a global trend towards more regulation in this area. U.S. developers should be aware of these discussions as similar regulations could be adopted in the U.S. or other markets.
- Identification of harmful AI practices: The proposal identifies certain AI practices as harmful, which could influence global standards or expectations for AI development. U.S. developers will need to ensure their AI technologies do not engage in these practices to maintain a positive reputation and avoid potential legal issues.
- Definition and regulation of “high-risk” AI systems: The EU’s approach to defining and regulating “high-risk” AI systems could set a precedent for other jurisdictions. U.S. developers working on similar systems should monitor these developments closely as they could impact their operations or require changes to their products.
- Transparency obligations: The EU’s emphasis on transparency, particularly for systems like chatbots or ‘deep fakes’, could influence user expectations and regulatory standards in other markets, including the U.S. Developers will need to consider how to incorporate transparency into their AI systems (see the sketch after this list for one illustration).
- Enforcement and support for innovation: The EU’s approach to enforcement and its support for innovation through measures like AI regulatory sandboxes could provide opportunities for U.S. companies to test and develop their AI technologies in the EU. Understanding these mechanisms will be important for taking advantage of these opportunities.
- Consistency with existing legislation: The proposal’s consistency with existing EU legislation, including data protection, consumer protection, non-discrimination, and gender equality laws, means that U.S. developers will need to consider a wide range of regulatory requirements when developing AI technologies for the EU market.
- Part of a wider package of measures: The proposal is part of a wider package of measures addressing the development and use of AI. U.S. developers should monitor these developments as they could influence global trends and standards in AI regulation.
- Shaping global norms and standards: The EU’s efforts to shape global norms and standards for AI could influence the regulatory environment in the U.S. and other markets. U.S. developers should stay informed about these developments to anticipate potential changes in the regulatory landscape.
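To make the transparency point concrete, here is a minimal sketch of one way a chatbot could disclose to users that they are interacting with an AI system. The names (`ChatbotReply`, `with_disclosure`, `AI_DISCLOSURE`) are hypothetical, and the snippet illustrates the general idea of user-facing disclosure, not the specific wording or mechanics the Act would require.

```python
from dataclasses import dataclass

# Hypothetical example: prepend a plain-language AI disclosure to chatbot output.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are generated automatically and may contain errors."
)

@dataclass
class ChatbotReply:
    text: str
    ai_generated: bool = True  # flag that a UI or audit log can surface later

def with_disclosure(reply: ChatbotReply, first_turn: bool) -> str:
    """Attach the disclosure on the first turn of a conversation."""
    if first_turn and reply.ai_generated:
        return f"{AI_DISCLOSURE}\n\n{reply.text}"
    return reply.text

if __name__ == "__main__":
    reply = ChatbotReply(text="Here is a summary of your invoice...")
    print(with_disclosure(reply, first_turn=True))
```

However your product surfaces it, the underlying design choice is the same: the fact that content is AI-generated should travel with the content, so downstream interfaces can disclose it consistently.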
AI Development and Deployment Is Rapidly Evolving
The European Union’s proposed regulations underscore the global trend towards ensuring that AI technologies are safe, transparent, and respect fundamental rights. These changes will have far-reaching implications for AI developers worldwide, including those in the United States.
Understanding and navigating these new rules will be crucial for businesses seeking to leverage AI technologies. Whether you’re developing high-risk AI systems, working on innovative AI solutions, or simply looking to integrate AI into your existing workflows, staying ahead of these regulatory shifts will be key to your success.
I’m eager to help you navigate these complexities through my AI consulting business. I want to work with you to align your AI tools with your daily workflows while ensuring compliance with emerging regulations. Together, we can be ready for what’s to come while setting your organization apart as one that demonstrates a strong commitment to safety, transparency, and respect for fundamental rights.
Don’t let regulatory changes catch you off guard. Reach out to me today, and let’s explore how we can work together to seamlessly integrate AI into your operations and create a competitive edge for your business in this new era of AI governance.