EU Takes Groundbreaking Step to Regulate Artificial Intelligence, but Critics Cite Concerns over Tech Monopolies and Loopholes

Brussels, Belgium – The European Union (EU) has taken a significant step by introducing groundbreaking legislation to regulate artificial intelligence (AI). While some argue that the measures are insufficient, others warn that they could impose undue restrictions on companies operating in the AI sector.

In response to the rapid advancements in AI, EU policymakers have been proactive in issuing rules and guidance for tech companies. This week, the European Parliament overwhelmingly approved the Artificial Intelligence Act, which adopts a risk-based approach requiring AI products to demonstrate compliance with the law before they are made available to the public.

Additionally, the European Commission has called upon major tech platforms, including Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, to provide detailed explanations of how they are addressing the risks associated with generative AI. The EU is particularly concerned about AI hallucinations, the viral spread of deepfakes, and the potential for AI-driven manipulation in the run-up to elections.

Despite being commended as the first jurisdiction in the world to regulate AI risks, the EU has drawn criticism for the legislation. Max von Thun, Europe director of the Open Markets Institute, highlights several flaws in the final agreement. He points to significant loopholes for public authorities and insufficient regulation of the largest foundation models, which pose the greatest potential for harm.

Von Thun’s primary concern is the dominance of tech monopolies and their potential abuse of AI technologies. He asserts that the legislation fails to address the consolidation of power held by a few influential tech companies. French startup Mistral AI’s recent partnership with Microsoft brought the issue of AI monopolies into sharp relief, surprising many who had believed the AI Act’s concessions to open source companies such as Mistral would help keep them independent of the tech giants.

While some express reservations about the limitations imposed by the legislation, others view it as a positive development in terms of clarity and guidance. Alex Combessie, co-founder and CEO of French open source AI company Giskard, hails the EU Parliament’s adoption of the AI Act as a historic moment. He emphasizes the importance of effectively implementing the checks and balances imposed by the legislation to ensure responsible AI usage.

The legislation classifies AI products based on their level of risk, with stricter regulations applied to those using more powerful foundation models. However, Katharina Zügel, policy manager at the Forum on Information and Democracy, argues that AI systems employed in the information space should be classified as high-risk due to their impact on fundamental rights. Zügel calls for AI to be treated as a public good rather than solely driven by private companies.

In response, Julie Linn Teigland, EY’s Europe, Middle East, India, and Africa (EMEIA) Managing Partner, emphasizes the importance of balancing private sector dynamism and regulation. Teigland believes that harnessing the potential of the private sector is crucial for driving AI innovation and making Europe more competitive. However, she also stresses that businesses must prepare for the law’s implementation by understanding their legal responsibilities.

Start-ups and small and medium-sized enterprises (SMEs) foresee additional challenges resulting from the legislation. Marianne Tordeux Bitker, public affairs chief at France Digitale, acknowledges the positive aspects of the AI Act’s transparency and ethical requirements but expresses concerns about the substantial obligations it imposes. She fears that the additional regulatory hurdles may ultimately benefit American and Chinese competitors, hindering the emergence of European AI champions.

While the AI Act represents a significant milestone, effective implementation and enforcement of the legislation remain significant challenges. Risto Uuk, EU research lead at the nonprofit Future of Life Institute, emphasizes the need for complementary measures: the AI Liability Directive, which would support liability claims for AI-enabled products, and the EU AI Office, which would streamline enforcement of the rules. Uuk stresses the importance of adequately resourcing the AI Office and engaging civil society in developing codes of practice for general-purpose AI to ensure the law’s efficacy.