
Competition & EU law insights

Keeping you up to date on Competition & EU law developments in Europe and beyond.


Artificial Intelligence: Competition and antitrust’s next frontier?

The advent of applications like OpenAI’s ChatGPT and DALL-E has generated significant discussion and debate about the broad impacts that artificial intelligence (AI) will have on societies and economies. An important aspect of that debate is whether the current regulatory ‘toolkit’ is sufficient to regulate generative AI.

AI presents many appealing economic features that promise to advance the wellbeing of consumers. Its use could improve information flows and facilitate more transparent, efficient markets. It could reduce barriers to entry for small companies, helping them to automate complex processes and allowing them to scale faster.

At the same time, competition issues are starting to emerge, both in relation to the dynamics of AI markets themselves and in relation to how AI is used in other sectors of the economy. This article considers these issues with a particular focus on the dynamics of the generative AI sector, as well as the risks of AI facilitating tacit collusion and entrenching existing power in digital markets.

Emerging competition issues in the generative AI sector

Most debate on the competition challenges arising out of AI’s rapid adoption concerns the dynamics within AI ecosystems. In particular, the generative AI sector has quickly been identified by competition authorities as being susceptible to anticompetitive conduct.

The Digital Competition Communiqué issued by the G7 competition authorities in November 2023 (G7 Communiqué) identifies that, because generative AI models rely on market participants having access to multiple key inputs to succeed, control over these inputs could be leveraged to harm competition. In particular, control over one or more inputs may enable conduct that increases barriers to entry, reduces innovation, or provides opportunities for other unilateral anticompetitive conduct.

The G7 Communiqué’s thesis is, perhaps unsurprisingly, consistent with commentary that is starting to emerge from competition authorities in major global economies. For example:

  • In the UK, the Competition and Markets Authority (CMA) published an initial report on AI ‘foundation models’ (FMs) in September 2023 (see this article for Bird & Bird’s insights on this report), highlighting concerns in relation to competition in the development of FMs, and the impact of FMs on competition in other adjacent markets.
  • In the US, the Federal Trade Commission (FTC) has been particularly vocal in raising concerns about antitrust and access to inputs, including, for example, in this June 2023 blog post. Its chair, Lina Khan, has also observed that there is a ‘risk that some of the existing incumbents would use control over … inputs to undermine innovation and competition’.[1]
  • The Australian Competition and Consumer Commission (ACCC), as part of Australia’s Digital Platform Regulator Forum (DP-REG), has published a working paper identifying concerns about control of inputs and the potential for large language models to increase anticompetitive conduct in adjacent digital markets.[2]
  • Meanwhile, the European Commission is currently calling for stakeholder contributions on competition in the generative AI sector.

The ‘building blocks’ of generative AI

There is an emerging consensus around the key inputs for generative AI technology:

1. Data

Massive datasets, whether of text or images, are the foundation of generative AI. Because generative AI models are trained on those datasets, the calibre (and commercial viability) of a model turns on the quantity and quality of the data used. It follows that firms will seek to obtain and/or maintain a data advantage over their competitors.

As the ACCC has recognised, it is not uncommon (nor is it anticompetitive) for first movers to be rewarded for taking on risks by having access to certain data.[3] However, the high volume of data required to train a generative AI model means that barriers to entry and expansion are high. Competition concerns may therefore arise in connection with how dominant firms obtain and use data, or in connection with how they might restrict competitors’ access. 

Firms that have gained data advantages as a result of substantial data collection in the course of providing services in adjacent markets (for example, providers of digital platform services or internet-connected devices) are likely to come under particular scrutiny. 

2. Computing hardware and resources

The computing power required to train and deploy generative AI models relies on access to specialised processors which are manufactured by only a small number of firms. With demand for these processors vastly exceeding supply (even prior to the emergence of generative AI in 2023), their high cost creates barriers to entry that few firms can meet.

New entrants therefore generally obtain computing power via cloud services that offer computational resources on demand.[4] However, the FTC has expressed concerns that the types of cloud services capable of providing sufficient computing power are only provided by a small number of firms. The ACCC has observed that many of these firms have their own proprietary generative AI applications and will therefore be competitors of the new entrants who seek to acquire their computing power.[5]

3. Talent and technical expertise

Generative AI models also rely on scarce labour expertise. Given the specificity of the skills required to build and deploy a generative AI model, it can be ‘difficult to find, hire, and retain’ talent.[6] Firms may find that this provides an incentive to ‘lock in’ expertise to stifle competition. The CMA has flagged concerns that incumbents may also be able to acquire talent more easily, whether by offering high salaries or by investing in training their existing software engineers.

What types of conduct are competition authorities concerned about?

Given the commentary emerging from competition authorities, we expect them to become more active in the sector quite soon. We have identified below some areas that are likely to attract particular attention.

1. M&A activity

Competition authorities have highlighted concerns that M&A activity in the sector could lead to a lessening of competition in markets for the development and deployment of generative AI applications. Vertical transactions are likely to come under particular scrutiny, given the potential competitive harms that could arise where a transaction forecloses access to key inputs, for example, unique and essential datasets. Likewise, authorities are set to carefully scrutinise horizontal transactions by which incumbents seek to acquire nascent competitors, given concerns about ‘killer’ and ‘creeping’ acquisitions in the technology sector.

Competition issues may also be more likely to arise in relation to specialised or highly regulated domains where data is not widely available (such as healthcare or finance).[7] There is likely to be increasing debate around whether existing merger clearance frameworks are sufficient to address these concerns. Indeed, this is already happening in Australia in the context of digital platforms more generally.

2. Bundling, tying and self-preferencing

The dynamics of the generative AI sector may enable anticompetitive unilateral conduct that, for example, contravenes prohibitions on the abuse of dominance (in the EU and UK) or the misuse of market power (in Australia).

In particular, the FTC, the ACCC and the CMA have each flagged concerns that vertically integrated firms that offer generative AI applications as part of a broader technology product and service ecosystem may have incentives to engage in discriminatory behaviour. The concern identified is that digital platforms with commercial interests in generative AI models may restrict their customers from accessing data to develop competing generative AI models. Similarly, incumbent firms that offer both generative AI products and cloud computing services may provide computation services to new entrants on discriminatory terms, for example by increasing prices or reducing quality.

A related concern may arise where generative AI providers offer their applications directly to consumers while simultaneously offering an API that allows other companies to ‘white label’ the underlying AI model and build their own applications (for example, OpenAI offers both ChatGPT and API access to the underlying GPT-4 model). The risk here is that incumbent firms will seek to offer access to their APIs on terms that protect their dominant positions.

Finally, the FTC, ACCC and CMA have raised concerns that firms operating in adjacent digital platform services markets might link the availability of generative AI services to the use of their other products (e.g., search engines, web browsers or other software, or operating systems). Alternatively, generative AI applications might be used by such firms to self-preference products or services within their ecosystems.

AI and the risk of algorithmic collusion

Competition concerns do not just arise within the AI ecosystem itself, but also in the use of AI within the broader economy. The use of complex algorithms is now ubiquitous. While algorithms provide significant benefits to both businesses and consumers, their use also raises important legal and ethical challenges relating to transparency, data governance, and automation.[8] One of the key risks from an antitrust standpoint is algorithmic collusion (i.e., collusion conducted or facilitated by algorithms).

The great 17th century French philosopher Descartes believed that it would be morally impossible for a machine to have “enough different dispositions to make it act in every human situation in the same way as our reason makes us act”.[9]  In the Cartesian worldview, human intelligence is a universal tool, which can be used for any situation, whereas machines are only able to act in particular situations. 

Whether or not you subscribe to this view, intellectual property lawyers have already been asked to consider whether AI software can be an inventor. It is likely that, in the near term, competition lawyers will be required to consider whether AI can enter into a contract, arrangement or understanding and, if so, where liability will rest. For the purposes of concerted practices, can a purpose of substantially lessening competition be readily inferred? If an AI system causes harm, should liability be attributed to the manufacturer, the developer, or the company using the AI? Should human supervision over autonomous systems be mandated?

1. So, what is an algorithm?

Broadly, algorithms are decision-making processes that, given particular data, “automate computational procedures to generate decisional outcomes”.[10] As explained by Chan, they can generally be grouped into two categories: “adaptive” and “learning” algorithms.[11] An adaptive algorithm is simpler and will make a decision “based on its instructions, after observing relevant information in the marketplace”.[12]

More complex algorithms – “learning” or “self-learning” algorithms – can be constructed to refine their own decision-making processes through machine learning and AI. This is the process by which computers obtain knowledge by studying data patterns, allowing problems to be worked out from experience. In essence, computers can learn and develop their decision-making processes over time.
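
To make this distinction concrete, the sketch below contrasts the two categories in Python. It is a minimal illustration only: the rule, the class and parameter names, and all numbers are our own assumptions, not any real vendor’s pricing system.

```python
import random

# A hypothetical 'adaptive' algorithm: a fixed, human-written rule applied
# to observed market information (here: undercut the rival by 1%).
def adaptive_pricer(rival_price: float) -> float:
    return rival_price * 0.99

# A hypothetical 'learning' algorithm: it starts with no pricing rule and
# refines its own decision-making from experience (a bare-bones value update).
class LearningPricer:
    def __init__(self, candidate_prices):
        # Estimated profit for each candidate price, learned over time.
        self.values = {p: 0.0 for p in candidate_prices}

    def choose_price(self, explore=0.1):
        # Occasionally try a random price; otherwise exploit the best so far.
        if random.random() < explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, price, observed_profit, step=0.2):
        # Nudge the price's estimated value toward the profit actually observed.
        self.values[price] += step * (observed_profit - self.values[price])

# Usage: the learning pricer discovers, rather than is told, what to charge.
pricer = LearningPricer([9.0, 10.0, 11.0, 12.0])
p = pricer.choose_price()
pricer.learn(p, observed_profit=p - 8.0)  # assume a unit cost of 8.0
```

The legally salient difference is visible in the code: the adaptive rule’s logic can be read off its instructions, whereas the learning pricer’s eventual conduct depends on what it experiences.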

Pricing algorithms raise particular legal and ethical concerns. These algorithms use machine learning and AI to determine and set optimal prices in real time. There is a risk that such algorithms may facilitate coordination or collusion.

2. What is algorithmic coordination or collusion?

In their influential work, Ezrachi and Stucke outline four scenarios where algorithmic coordination or collusion may arise[13]:

Messenger

Humans reach an agreement to collude, and an algorithm is used as a “messenger” to implement or monitor it. Here, computers execute the will of humans and facilitate existing collusion.

Hub and Spoke

Competitors use a single algorithm to decide prices (e.g., competitors outsource their pricing upstream and ultimately all adopt the upstream provider’s algorithm). A group of similar vertical agreements may lead to a hub-and-spoke scenario, where the algorithm coordinates pricing as the “hub”.

Predictable Agent

Companies unilaterally create and implement their own pricing algorithms (i.e., there is no agreement between businesses). However, these algorithms monitor and quickly adapt to each other’s prices and perform as “predictable agents”. There is an increased danger of conscious parallelism or tacit collusion.[14]

Digital Eye

Computers learn and independently make profit-maximising decisions. Humans are separated from the decisions made by the algorithm, and there is no human anticompetitive agreement or intent. Humans may not even be aware of the issue, but the result is nonetheless collusion.

The last two scenarios are most likely to be facilitated by generative AI.
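
A toy simulation helps show why those two scenarios trouble regulators. In the sketch below (our own illustration; the pricing rule and every number are invented assumptions), each firm unilaterally deploys a simple “predictable agent” that merely monitors and responds to its rival, yet prices ratchet up to the monopoly level without any agreement or communication:

```python
COST = 10.0            # the competitive price (price equals marginal cost)
MONOPOLY_PRICE = 20.0  # the joint profit-maximising price

def predictable_agent(my_price: float, rival_price: float) -> float:
    """Unilateral rule: if the rival is at or above my price, edge upward;
    if the rival undercuts me, match them rather than start a price war."""
    if rival_price >= my_price:
        return min(my_price + 0.5, MONOPOLY_PRICE)
    return max(rival_price, COST)

a = b = COST  # both firms start at the competitive price
for day in range(30):
    a = predictable_agent(a, b)  # firm A reacts to B's current price
    b = predictable_agent(b, a)  # firm B then reacts to A's new price

print(f"day 30: firm A = {a:.2f}, firm B = {b:.2f}")  # both reach 20.00
```

Because each rule also matches any undercut, deviation is instantly punished, which is what sustains the supracompetitive price: conscious parallelism emerges from two unilateral programs with no “meeting of the minds”.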

The risk of algorithmic collusion (and its potential impact on the economy) should not be underestimated, particularly as there have already been several enforcement actions relating to collusive arrangements and algorithms.[15] For example, in late 2023, the Italian competition regulator launched an ex officio fact-finding investigation into the use of pricing algorithms in passenger air transport (read our article here). Meanwhile, in the United States, a proposed class action has been initiated by renters alleging that landlords engaged in cartel conduct by delegating price setting to RealPage Inc, a company that uses an AI-powered tool to calculate optimal rents.[16]

The CMA has previously found that algorithm-facilitated collusion may be more likely in markets that are already prone to collusion (e.g., due to the homogeneity of products).[17]

3. Australia’s competition laws 

The provisions of Australia’s competition laws that are most relevant to algorithmic pricing and collusion are the cartel conduct and concerted practices prohibitions.

Cartel conduct occurs where competitors or potential competitors enter into an agreement or arrangement to fix prices, share markets, restrict outputs, or rig bids. Cartels are prohibited per se. It does not matter whether the conduct has the purpose or effect of decreasing competition. 

One of the key elements to establish cartel conduct is the requirement for an anticompetitive contract, arrangement, or understanding (CAU). This phrase refers to a range of consensual dealings from the formal (a contract) to the informal (an understanding). An understanding requires a “meeting of the minds” and a “commitment” by one of the parties to a course of action.[18]

The concept of a concerted practice is relatively new to Australia’s competition laws and remains largely untested. Broadly, it captures conduct that falls short of a CAU but involves communication or cooperative behaviour with the purpose, effect, or likely effect of substantially lessening competition (SLC) in a market. It extends to behaviour that goes beyond an individual responding to the market independently.[19]

4. Do Australia’s competition laws prohibit these forms of collusion?

Cartel Conduct 

It is likely that, where competitors (or potential competitors) agree to collude and use algorithms to facilitate and monitor a cartel (i.e., the “Messenger” scenario), this behaviour will fall foul of Australia’s cartel laws and attract liability. In this scenario, it would be possible to establish a “meeting of the minds” and a commitment, so long as there is sufficient evidence. A real-life example of the “Messenger” scenario is United States v Topkins,[20] in which David Topkins pleaded guilty to fixing prices with other sellers on the Amazon Marketplace through the use of aligned pricing algorithms. In another example, the CMA investigated price fixing involving the use of algorithms in the Trod Ltd/GB eye Ltd case. The two parties agreed to a ‘classic’ horizontal price-fixing cartel for posters and frames sold on Amazon’s UK website and implemented it using automated repricing software.
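
For illustration only, a “Messenger”-style repricer of the kind at issue in these cases might look like the following sketch. It is a hypothetical reconstruction, not the actual software used in Topkins or Trod/GB eye: the agreed floor and the function and parameter names are all invented.

```python
AGREED_FLOOR = 14.99  # the minimum price fixed in advance by the human cartel

def reprice(my_price, cartel_member_prices, outsider_prices):
    """Undercut genuine outsiders, but never a fellow cartel member and never
    the agreed floor: the algorithm merely executes the human agreement."""
    candidates = [p - 0.01 for p in outsider_prices]  # beat outside competitors
    candidates += cartel_member_prices                # never go below an ally
    best = min(candidates, default=my_price)
    return max(best, AGREED_FLOOR)  # the human agreement binds the machine

# An outsider at 16.50 is beaten, but a co-conspirator at 15.00 is matched
# rather than undercut:
print(reprice(15.50, cartel_member_prices=[15.00], outsider_prices=[16.50]))  # 15.0
```

Because the collusion here is reached by humans first, evidence of the agreement (and of the software enforcing it) maps comfortably onto the existing cartel prohibitions, unlike the last two scenarios above.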

The use of a single algorithm in the “Hub and Spoke” scenario, where the algorithm acts as a “hub” to coordinate industry pricing, may also amount to prohibited cartel conduct although establishing a CAU may be more challenging.  

On the other hand, one of the attributes of the “Predictable Agent” and “Digital Eye” scenarios is that algorithmic collusion may take place without human communication (and even without the knowledge of humans). While an “understanding” can be tacit and arrived at without express communication,[21] there must still be a meeting of the minds and likely some type of communication.[22] In the absence of communication, tacit and autonomous algorithmic collusion is unlikely to give rise to liability under the cartel provisions. 

The use of algorithms also raises challenges in determining intent. If an algorithm is used as a “messenger” to implement a cartel, it may be possible to infer the intent of the participants using the AI. The internal processes of a simple algorithm could be reverse engineered and used as evidence of intent.[23] However, it is more difficult to find evidence of intent where complex algorithms are deployed. As explained by Zheng and Wu, these algorithms can produce uncontrollable and unpredictable outcomes.[24] There are also challenges in detecting algorithmic collusion, given regulators’ lack of oversight and the newfound ability of algorithms to encrypt messages autonomously.[25]

Concerted Practices

The next logical step is to consider whether liability for algorithmic collusion may arise under the concerted practices prohibition, which is interpreted to capture conduct that falls short of a contract, arrangement, or understanding, but nevertheless impacts competition. 

It is likely that the prohibition on concerted practices will capture some cases of algorithmic collusion in the “Messenger” and “Hub and Spoke” scenarios, so long as it could be proven that the conduct has the purpose, effect, or likely effect of SLC. 

On the other hand, in the “Predictable Agent” scenario, companies unilaterally create and implement their own pricing algorithms, and in the “Digital Eye” scenario, self-learning algorithms (or ‘digital eyes’) are unilaterally adopted. It is not clear how unilateral conduct of this nature – without more – could contravene the concerted practices prohibition, which still requires communication or cooperative behaviour between two or more firms. This is one of the reasons why Nicholls and Fisse argue that the concerted practices prohibition is “fundamentally incapable of dealing adequately” with the Predictable Agent and Digital Eye scenarios.[26] The authors also query how a corporation can “engage in” a concerted practice if there is no assent from a human agent (as in the “Digital Eye” scenario).

There is also the practical concern that this may be a case of “catch me if you can”.  In circumstances where technology is moving rapidly, to the point where many developers do not understand exactly how their software operates or makes decisions, and given the limited number of experts, is it reasonable to expect regulators or law enforcement to be able to identify collusive behaviour that is implemented by algorithms? 

The regulation of algorithms moving forward

There appear to be real questions as to whether the current enforcement tools are sufficient to address algorithmic collusion, particularly in the Predictable Agent and Digital Eye scenarios.

A range of measures has been proposed to address the potential anticompetitive impacts of algorithms. These range from proposals to update existing laws (e.g., expanding the concept of “agreement” to include algorithmic collusion)[27] to the introduction of new laws targeted at algorithms (e.g., a per se prohibition on certain pricing algorithms,[28] or a prohibition on coordination that results in harm[29]).

In Australia, the Government has recently supported the ACCC’s proposal to introduce mandatory, service-specific codes for digital platforms, with obligations to tackle conduct such as anticompetitive self-preferencing or tying. These codes may provide a mechanism to introduce measures relating to algorithms. For example, in its fifth interim report, the ACCC considered the need for transparency in relation to algorithms.

The ACCC’s (and the Government’s) approach will also be informed by changes overseas. For example, in the UK, the Digital Markets, Competition and Consumers Bill is due to come into force in Spring 2024 (see our article here). It will allow the CMA to designate digital firms as having strategic market status and to develop specific, tailored conduct requirements for those firms.

If laws are made to target algorithmic collusion specifically, the ACCC and other enforcement bodies will need to strike the right balance so as to not dampen competition, innovation, and the benefits of algorithms in the marketplace. 

If you need more information or further guidance in this area, please contact Thomas Jones, Matthew Bovaird, Patrick Cordwell, or Dylan McGirr.

The authors acknowledge the contributions of Saskia King and Quinn Liang on aspects of UK competition law and regulation. 

[1] Stanford University (Institute for Economic Policy Research), FTC's Lina Khan warns Big Tech over AI (Web Page, 3 November 2023) <https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai>.

[2] The Government also plans to introduce legislation to regulate the use of high-risk AI (e.g. in areas such as healthcare, law enforcement, and job recruitment). See Sydney Morning Herald, New Laws to Curb Danger of High Risk Artificial Intelligence (Web Page, 14 January 2024) <https://www.smh.com.au/politics/federal/new-laws-to-curb-danger-of-high-risk-artificial-intelligence-20240111-p5ewnu.html>.

[3] Rod Sims, ‘The ACCC’s approach to colluding robots’ (Speech, 2017).

[4] Federal Trade Commission, Generative AI Raises Competition Concerns (Web Page, 29 June 2023) <https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns>.

[5] Federal Trade Commission, Generative AI Raises Competition Concerns (Web Page, 29 June 2023) <https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns>.

[6] Federal Trade Commission, Generative AI Raises Competition Concerns (Web Page, 29 June 2023) <https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns>.

[7] Federal Trade Commission, Generative AI Raises Competition Concerns (Web Page, 29 June 2023) <https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns>.

[8] See S. C. Olhede and P. J. Wolfe, ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’ (2018) Philos Trans A Math Phys Eng Sci.

[9] René Descartes, Discourse on Method and Related Writings (Penguin Books, 1999), 41.

[10] Michal S Gal, ‘Algorithms as Illegal Agreements’ (2019) 34(1) Berkeley Technology Law Journal 67, 77.

[11] Jeremy Chan, ‘Algorithmic Collusion and Australian Competition Law: Trouble Ahead for the National Electricity Market?’ (2021) 44(4) UNSW Law Journal 1365.

[12] Ibid, 1372. 

[13] Ariel Ezrachi and Maurice E Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016).

[14] Tacit collusion occurs where companies recognise their shared interests and unilaterally set prices at a supracompetitive level.

[15] See e.g. United States v Topkins (ND Cal, CR 15-00201-001 WHO, 22 March 2017).

[16] See Competition Policy International, RealPage Antitrust Suit could be legal test for AI-driven collusion (Web Page, 11 December 2023) <https://www.pymnts.com/cpi_posts/legal-test-unveiled-in-realpage-antitrust-suit-over-ai-driven-price-setting/>.

[17] See Competition and Markets Authority, Pricing Algorithms (Economic Working Paper, 2018). 

[18] See Apco Service Stations Pty Ltd v Australian Competition and Consumer Commission (2005) 159 FCR 452.

[19] See ACCC, ‘Guidelines on Concerted Practices’ (2018).

[20] United States v Topkins (ND Cal, CR 15-00201-001 WHO, 22 March 2017).

[21] Australian Competition and Consumer Commission (ACCC) v Colgate-Palmolive Pty Ltd (No 4) (2017) 353 ALR 460.

[22] See Chan (n 11), 1384.

[23] See Zheng and Wu, ‘Collusive Algorithms as Mere Tools, Super-Tools or Legal Persons’ (2019) 15(2-3) Journal of Competition Law & Economics 123.

[24] Ibid. 

[25] Gal (n 10).

[26] Rob Nicholls and Brent Fisse, ‘Concerted Practices and Algorithmic Coordination: Does the New Australian Law Compute?’ (2018) 26(1) Competition and Consumer Law Journal 82, 102.

[27] See OECD, Algorithms and Collusion: Competition Policy in the Digital Age (Report, 2017). 

[28] See Joseph Harrington, ‘Developing Competition Law for Collusion by Autonomous Artificial Agents’ (2018) 14(3) Journal of Competition Law and Economics 331.

[29] Rob Nicholls and Brent Fisse (n 26).

Tags

competition law, eu law, antitrust, antitrust law, artificial intelligence, chatgpt, ai, generative ai, europe, competition & eu law, australia, competition and ai, dall-e, uk, ai risks, algorithm, self-preferencing, unfair ai