
Competition & EU law insights

Keeping you up to date on Competition & EU law developments in Europe and beyond.

4 minute read

Joint Statement on Competition in Generative AI by global competition authorities

The European Commission (Commission), the U.K. Competition and Markets Authority (CMA), the U.S. Department of Justice (DoJ), and the U.S. Federal Trade Commission (FTC) (together, the Competition Authorities) have issued a joint statement on the competition landscape surrounding generative artificial intelligence (AI) foundation models and AI products. The statement underlines the importance of working towards fair, open, and competitive markets for AI and its related markets. It outlines the potential benefits of AI, but also acknowledges the associated risks and the need for continued vigilance to keep markets fair, open, and competitive, which the Competition Authorities consider essential for fostering innovation and economic growth.

Risks to competition in the AI ecosystem

The rapid evolution of generative AI is poised to be one of the most significant technological advancements in recent decades. The potential benefits include material improvements for citizens, boosts in innovation, and significant economic growth. The Competition Authorities note, however, that these advancements come with risks that require oversight to prevent fair competition from being undermined, and they intend to use their available powers to address any such risks before they become entrenched or irreversible harms.

The Competition Authorities recognise that each has its own legal powers and jurisdictional context, and that their decisions will therefore always remain sovereign and independent. At the same time, they note that the risks associated with AI may not respect international boundaries, and they are committed to sharing an understanding of the issues and to using their respective powers to address those risks. The Competition Authorities have identified three key risks to competition in the AI ecosystem:

  1. Concentrated Control: Specialised chips, substantial computing power, large-scale data, and specialist expertise are crucial inputs for developing AI foundation models. Concentrated control of these key inputs could allow a few companies to exploit existing or emerging bottlenecks across the AI stack, limiting the scope for disruptive innovation at the expense of fair competition that benefits the public and the economy.
  2. Market Power: Large incumbent digital firms could leverage their existing market power to entrench or extend their position in AI-related markets, potentially preventing new competitors from entering.
  3. Collaborations and Investments: Widespread partnerships, financial investments, and other connections between firms related to the development of generative AI could be used to undermine or co-opt competitive threats, steering market outcomes in favour of dominant firms.

Principles for protecting competition

To address those perceived risks, the Competition Authorities have identified several common principles that they believe will generally serve to enable competition and foster innovation in the AI ecosystem:

  1. Fair Dealing: Ensuring that firms engage in practices that promote investment and innovation by third parties.
  2. Interoperability: Encouraging AI products and services to work together seamlessly, fostering competition and innovation. Competition Authorities will closely scrutinise any claims that interoperability requires sacrifices to privacy and security. 
  3. Choice: Businesses and consumers benefit from diverse options in the AI ecosystem, and the Competition Authorities will closely scrutinise mechanisms that could lock them into specific products or services.

Other Competition and Consumer Risks 

The Competition Authorities will monitor additional risks such as the potential for AI technologies to facilitate anti-competitive practices like price fixing or unfair price discrimination. 

Consumer protection will also be a priority, and the Competition Authorities will remain vigilant of any threats to consumers arising from the use and application of AI. They are particularly concerned about firms that deceptively or unfairly use consumer data to train their models, which can undermine people’s privacy, security, and autonomy. There is also a concern that firms using business customers’ data to train their models could expose competitively sensitive information.

Furthermore, the Competition Authorities consider it important that consumers are informed, where relevant, about when and how an AI application is employed in the products and services they purchase or use.

Key takeaways

The joint statement indicates that the Competition Authorities plan on “using their available powers to address any such risks before they become entrenched or irreversible harms”. This suggests a more proactive stance to protect competition in the market and consumer interests. Moreover, the statement points to a shift towards a more interventionist approach to protecting competition and national security interests, meaning companies should be prepared for tighter regulation and increased scrutiny in the future.

This joint statement is a first step towards international cooperation among the Competition Authorities on AI, and further collaboration, joint efforts, and guidance are expected to follow. It also means that the Competition Authorities are likely to share information about companies operating in the AI ecosystem that could be of interest to potential investigations.

To ensure compliance with competition laws and regulations, companies should take note of the risks outlined in the statement, including concentrated control of key inputs, entrenching or extending market power in AI-related markets, and arrangements involving key players that could amplify those risks. Companies should also be mindful of the risks that can arise where AI is deployed in markets, such as sharing competitively sensitive information, fixing prices, or colluding on other terms or business strategies. To avoid practices that could harm competition, companies should evaluate their own conduct, carry out regular risk reviews, and consider appropriate mitigation strategies.

For further details, you can read the full joint statement here.

For more information or further guidance in this area, please contact Pauline Kuipers, Dr Saskia King, Reshmi Rampersad, or Antonio Rodrigues.

Tags

competition & eu law, ai, competition law, competition, antitrust, antitrust law, technology & communications, europe, uk, united states, artificial intelligence, generative ai