In recent years, artificial intelligence (“AI”) has emerged as a transformative technology with vast potential to reshape various industries. However, concerns about its potential anti-competitive effects have prompted regulatory action. As part of the European Union's Digital Strategy, which aims to support Europe's digital transformation, the draft Artificial Intelligence Act (“Draft AI Act”) was proposed by the European Commission (“Commission”) in 2021.
This legislation will be the first of its kind, providing comprehensive regulation specifically for AI. It is in line with the Commission's 2018 European AI Strategy, which aims to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy, as well as with President von der Leyen's commitment to propose legislation for a unified European approach to the ethical and societal impacts of AI.
While the Draft AI Act will have consequences for competition law, AI itself will reshape the competitive dynamics in markets, likely giving rise to new forms of anti-competitive conduct such as algorithmic collusion. Whereas traditional collusion involves direct communication between competitors, algorithmic collusion leverages sophisticated algorithms to facilitate anti-competitive behaviour. This blog sets out the current state of play with respect to the AI Act and the consequences of AI for competition law.
The proposed AI Act
The EU believes that fostering excellence in AI will strengthen Europe’s potential to compete globally. Following from the European AI strategy, the EU aims to achieve this by:
- enabling the development and uptake of AI in the EU (e.g. the Commission plans to invest EUR 1 billion per year in AI);
- making the EU the place where AI thrives from the lab to the market;
- ensuring that AI works for people and is a force for good in society; and
- building strategic leadership in high-impact sectors.
The Commission proposed the draft AI Act in 2021. Just a month ago, in December 2023, the EU institutions reached a provisional agreement on the Draft AI Act. The Draft AI Act has the objective of ensuring the proper functioning of the EU single market by creating the conditions for the development and use of trustworthy AI systems in the EU. This would be achieved by:
- Ensuring that AI systems placed on the EU market are safe and respect existing EU law;
- Ensuring legal certainty to facilitate investment and innovation in AI;
- Enhancing governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems; and
- Facilitating the development of a single market for lawful, safe and trustworthy AI applications and preventing market fragmentation.
The Draft AI Act will take the form of a regulation, which has direct effect in all EU Member States, and introduces several sets of rules. The obligations to comply with those rules will differ depending on the risk level posed by the AI system: low and minimal risk (no obligations), specific transparency risk, high risk (e.g. AI systems used in critical infrastructure or education), and unacceptable risk (such as certain biometric categorisation systems).
The Draft AI Act establishes specific regulations for general-purpose AI systems (“GPAI”) to ensure transparency throughout their value chain. For highly powerful GPAI that could potentially pose significant risks, there will be additional mandatory obligations regarding risk management, monitoring of serious incidents, model evaluation, and adversarial testing. These obligations will be implemented through codes of practice developed in collaboration with industry, the scientific community, civil society, and other stakeholders, along with the Commission. Certain systems, such as biometric categorisation systems using sensitive characteristics, will be considered to pose an unacceptable risk.
In terms of governance, the Draft AI Act establishes a new body within the European Commission: the European AI Office. The AI Office will oversee the most advanced AI models, contribute to fostering standards and testing practices, and enforce the rules in all Member States. The AI Board, comprising Member States' representatives, will remain a coordination platform and an advisory body to the Commission.
At the national level, the Draft AI Act requires each Member State to designate a national supervisory authority to oversee the implementation of the new AI rules within its territory (Article 59 of the Draft AI Act). Article 64 of the Draft AI Act gives the national supervisory authorities the power to access data and documentation in the context of their activities.
As mentioned, the agreement is a political one: the EU institutions have only reached a provisional agreement on the Draft AI Act, and meetings to finalise its details will continue. The text of the provisional agreement has not yet been published, but this is expected soon. The final text of the AI Act will then be adopted by the European Parliament and the Council and published in the Official Journal of the European Union, probably in the first quarter of 2024. Compliance with the AI Act will be crucial for companies: non-compliance can result in fines of up to 7% of global annual turnover, and non-compliant systems may be barred from the EU market after a grace period. For more information, please see the latest article at Bird & Bird Insights.
AI Act & EU competition
The national supervisory authorities will have access to company data in order to conduct the necessary checks on compliance with the rules of the AI Act. They will be obliged to share this data with competition authorities where it may be of interest to them, without any requirement of a suspicion of anti-competitive behaviour. According to Article 63 of the Draft AI Act, national supervisory authorities must report “without delay” to both the Commission and national competition authorities “any information identified in the course of market surveillance activities that may be of interest for the application of Union law on competition rules.”
Given the growing reliance of companies on algorithms to make strategic decisions, it is expected that the national supervisory authorities will hold a significant amount of information of interest to national competition authorities and the Commission when enforcing EU competition rules. Of course, it remains to be seen whether the final text of the AI Act will differ on this point.
It should be remembered that national competition authorities have the power to request information from companies through a formal request when there is a suspicion of a violation of competition rules. The Digital Markets Act (“DMA”) follows a similar approach, granting the Commission the power to conduct inspections and request information when it suspects a violation of the obligations set out in the DMA (Article 26(1) DMA).
The Draft AI Act appears to take a contrasting approach, as the threshold for sharing information collected by national supervisory authorities in the course of their activities with national competition authorities seems to be lower: the obligation to share information applies regardless of whether a violation of EU competition rules is suspected or alleged. The Draft AI Act will therefore have significant consequences for EU competition law.
As companies increasingly use AI for their strategic decisions, competitive dynamics in markets are likely to change. The use of AI in markets introduces potential risks of anti-competitive behaviour.
First, the availability of data and market predictions can increase market transparency, which may cause companies to act less independently on the market, but also make it easier for companies to communicate and to detect deviations from anti-competitive agreements. This is particularly relevant for online marketplaces where pricing and transaction data are readily available. Secondly, AI can enable price adjustments and interactions with competitors, for example on platforms, which can facilitate the implementation and monitoring of anti-competitive behaviour. In both scenarios, no direct communication between competitors needs to take place for competition to be harmed. Lastly, the use of AI by companies that are dominant in a market can lead to the exclusion of competitors from that market. These risks are most likely to materialise in markets that are already prone to collusion due to factors such as an oligopolistic structure, product homogeneity and the presence of a small number of companies.
As a reminder, EU competition law contains two major prohibitions:
- Any agreements and concerted practices that restrict, distort or prevent competition (Article 101(1) TFEU); and
- Abuse of a dominant position by an undertaking that holds such a position on a market (Article 102 TFEU).
We will discuss some aspects of the use of AI in relation to both of these basic rules.
Anti-competitive conduct under Article 101 TFEU
In its note ‘Algorithmic competition’ of 14 June 2023 for the OECD, the Commission defines an ‘algorithm’ as an exact sequence of instructions that generates an output in a clearly defined format from a given digital input. Algorithms can range from simple sets of rules to very advanced machine learning or artificial intelligence systems. Algorithms can affect competition through ‘algorithmic collusion’, which the Commission defines as any form of anti-competitive agreement or coordination among competing firms that is facilitated or implemented through automated systems.
In economics, a distinction is made between ‘explicit collusion’ and ‘tacit collusion’. Explicit collusion refers to anti-competitive behaviour that results from agreements or concerted practices between undertakings. Tacit collusion, by contrast, refers to forms of coordination based on the parallel behaviour of competing undertakings which is not the result of an agreement or concerted practice. This is where potential competition law issues may arise, as competition law does not prohibit parallel behaviour of competing undertakings absent an agreement and/or concerted practice that restricts competition.
The Commission has recognised that algorithms can not only lead to collusion but can also be used to reinforce and monitor it (i.e. in the implementation phase). The Commission identified three specific situations involving different families of algorithms:
- Algorithmic facilitation of traditional collusion: this involves the situation where algorithms are used to support or implement a pre-existing collusion.
- Algorithmic alignment via a third party: this involves the situation where a third party provides the same algorithm or coordinated algorithms to different competitors, resulting in an alignment between the parties with respect to competitive parameters such as pricing, output and customers. The difference with algorithmic facilitation of traditional collusion is that, in the case of alignment via a third party, the use of the same third-party software provider is not the consequence of a pre-existing anti-competitive arrangement.
- Algorithmic alignment: the parallel use by competitors of distinct self-learning/deep-learning (pricing) algorithms that, through their automatic, reciprocal interaction, can lead to the alignment of (pricing) behaviour without any direct contact between the competitors. This form of collusion will be difficult to prove.
It is already clear that the use of software monitoring tools and digital platforms can have anti-competitive effects. The Commission imposed fines totalling over EUR 111 million on four consumer electronics manufacturers for restricting the ability of their online retailers to set their own retail prices for consumer electronic goods. The manufacturers used monitoring tools to track resale price setting in the distribution network and to intervene in the event of price decreases.
As follows from the CJEU judgment in Eturas, the use of digital platforms can lead to the coordination of prices. Eturas was an online travel booking system for package tours used by travel agents. In 2009, Eturas informed the travel agents that discounts for online bookings would be capped at 3% and invited them to vote on this idea. On the same day, the travel agents received a system notification of the reduction of the discount and of the technical modifications made to apply the cap. Although several travel agencies submitted that they had not received or read the notice sent by Eturas, the CJEU found that it could be presumed that Eturas and the travel agencies had taken part in concerted practices infringing Article 101(1) TFEU, as they had not publicly distanced themselves from the anti-competitive practice.
Abuse of dominance under Article 102 TFEU
Companies that have significant market power may use AI to exclude competitors. This can be done by programming the algorithm to give preferential treatment to their own products and services. The European Commission has already taken enforcement actions against such behaviour, resulting in:
- substantial fines being imposed on Google (Google Shopping) of EUR 2.42 billion. The Commission held that Google illegally favoured its own comparison shopping service by displaying it more prominently in its search results than other comparison shopping services. The EU General Court upheld the decision of the Commission and confirmed that self-preferencing by a dominant firm can be a stand-alone abuse in certain circumstances under Article 102 TFEU; and
- commitments being accepted from Amazon regarding its use of non-public data relating to sellers' activities on the Amazon marketplace for its own retail business, and its undertaking to treat all sellers equally when ranking offers on the Amazon marketplace.
Just this month, the Commission has launched two calls for contributions on competition in virtual worlds and generative AI and sent requests for information to several large digital players. Interested parties are invited to submit their responses to the calls for contributions by 11 March 2024.
All interested stakeholders can share their experience, provide feedback on the level of competition in the context of virtual worlds and generative AI, and offer insights on how competition law can help ensure that these new markets remain competitive. The Commissioner for Competition, Margrethe Vestager, stated that virtual worlds and generative AI are rapidly developing, that it is fundamental that these new markets stay competitive, and that nothing should stand in the way of businesses growing and providing the best and most innovative products to consumers.
Additionally, the Commission is looking into some of the agreements concluded between large digital market players and generative AI developers and providers in order to assess the impact of those partnerships on market dynamics. The Commission confirmed that it is assessing whether the partnership between Microsoft and OpenAI might be reviewable under the EU Merger Regulation. Microsoft, a leading technology company, and OpenAI, an artificial intelligence research organisation, formed a strategic partnership to collaborate on the development and deployment of advanced AI technologies.
The partnership between Microsoft and OpenAI has also raised concerns among national competition authorities. While the partnership can accelerate innovation, it may also raise concerns about potential anti-competitive effects such as market foreclosure or the creation of barriers to entry for competitors.
The German competition authority held that the partnership does not constitute a notifiable merger, even though Microsoft would gain “material competitive influence” over OpenAI. It nevertheless indicated that it could launch a fresh probe if Microsoft were to deepen its control over OpenAI. By contrast, the UK's Competition and Markets Authority (“CMA”) opened an investigation into Microsoft/OpenAI, as the CMA has concerns that the partnership gives Microsoft control over OpenAI.
The Commission’s investigation into the partnership between Microsoft and OpenAI fits with the Commission's approach in the Microsoft/Activision deal, which the Commission cleared on 15 May 2023 subject to remedies. For more information, please see the latest Bird & Bird article.