Algorithmic pricing under the antitrust microscope: lessons from recent developments
August 2024 | SPECIAL REPORT: COMPETITION & ANTITRUST
Financier Worldwide Magazine
The use of pricing algorithms is increasingly common across many industries in the modern economy, as businesses have turned to cutting-edge technology to better understand and respond to ever-increasing amounts of market data. But in the past two years, the practice has also become the focus of increasing antitrust scrutiny from both the private plaintiffs’ bar and government enforcers, who have questioned whether the use of such technology might facilitate, or even constitute, price fixing among competitors. While the legal viability of these theories is still being tested, initial decisions from courts considering private complaints and statements of interest explaining enforcers’ positions have offered insight into strategies to mitigate the risk of potential litigation and investigation.
Background on algorithms and recent antitrust developments
Pricing algorithms are computer programmes that provide pricing recommendations or, in some cases, automatically adjust pricing based on current and past data about market conditions. These algorithms consider many of the same types of data points that businesses have always used to make pricing decisions, including historical data, as well as current indicators of supply and demand in the market. But compared to human pricing managers, algorithms can process many more data points at a much faster rate, often through the use of artificial intelligence or machine learning techniques. This efficiency allows companies employing algorithmic pricing strategies to respond more rapidly to changes in supply or demand and make pricing decisions based on a more accurate, real-time understanding of changing market conditions – often leading to increased revenue and prices that better reflect current demand.
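To illustrate the concept (and nothing more), the short Python sketch below shows the general shape of such a tool: it blends a company’s own historical prices with a public market benchmark and a demand signal to produce a recommendation. Every name, figure and weighting in it is hypothetical and is not drawn from any product discussed in this article.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MarketSnapshot:
    # Hypothetical inputs such a tool might consider.
    own_recent_prices: list[float]   # the company's own historical prices
    own_occupancy_rate: float        # share of units currently occupied (0-1)
    public_market_index: float       # e.g. a published average market rate

def recommend_price(snapshot: MarketSnapshot) -> float:
    """Illustrative recommendation: anchor on the company's own pricing
    history, then adjust toward public market data and current demand."""
    own_baseline = mean(snapshot.own_recent_prices)
    # Blend the company's own baseline with the public market benchmark.
    blended = 0.7 * own_baseline + 0.3 * snapshot.public_market_index
    # Nudge the price up when occupancy (a demand signal) is high, down when low.
    demand_adjustment = 1.0 + 0.2 * (snapshot.own_occupancy_rate - 0.5)
    return round(blended * demand_adjustment, 2)

# Example: a landlord's own data plus a public market benchmark.
print(recommend_price(MarketSnapshot(
    own_recent_prices=[1950.0, 2000.0, 2025.0],
    own_occupancy_rate=0.93,
    public_market_index=2100.0,
)))
```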
But the very efficiency that makes pricing algorithms so attractive to companies has also provoked fears from some that the technology may be used to more efficiently violate antitrust laws. These fears have resulted in a flurry of recent civil antitrust lawsuits alleging price-fixing conspiracies facilitated using algorithmic pricing technology provided by a common third-party vendor. And government enforcers have also indicated their belief that the use of pricing algorithms may violate antitrust laws.
The wave of recent civil litigation began in October 2022 with the filing of the first putative class action complaint alleging a conspiracy among landlords to inflate the prices of multifamily rental housing via the concurrent use of a software company’s pricing algorithms. This complaint was eventually consolidated with over 40 follow-on lawsuits in the Middle District of Tennessee. And similar lawsuits have since been initiated alleging algorithmic collusion involving other pricing algorithms used for multifamily housing, casino hotels in Las Vegas and Atlantic City, luxury hotels, and major health insurers.
These suits, which name both algorithm providers and their customers as defendants, allege that defendants have conspired to fix prices at artificially inflated levels by agreeing to accept prices recommended by shared third-party algorithms. Plaintiffs advance a theory that these arrangements are akin to traditional ‘hub-and-spoke’ conspiracies, with the algorithm providers serving as ‘hubs’ and facilitating collusion among their customers, which act as the ‘spokes’. The purported conspiracies then consist of a combination of vertical agreements between each individual customer and the algorithm provider and alleged horizontal agreements among the customers themselves. In many cases, plaintiffs also allege that the algorithm makes recommendations based on the combined commercially sensitive pricing and supply data provided by each of its customers.
Government enforcers have also begun articulating their antitrust analysis of the use of pricing algorithms. The Federal Trade Commission and the Department of Justice (DOJ), for example, have stepped into three of the private lawsuits to file statements of interest announcing their view of the law. In the consolidated multifamily housing litigation, the DOJ urged the court to reject defendants’ motion to dismiss, arguing that section 1 of the Sherman Act may be violated (potentially under the per se liability standard) when companies combine sensitive non-public information in an algorithm they rely on to make pricing decisions with the knowledge that their competitors will do the same. In two other statements, the DOJ argued that a conspiracy could exist even if companies are not obligated to accept recommended prices, and that communications among alleged conspirators are not required in order to state a claim. Rather, in the DOJ’s view, it is price fixing for competitors to jointly delegate key aspects of their pricing to a common algorithm, even if the competitors retain authority to deviate from the algorithm’s recommendations and even if each competitor’s adoption of the algorithm is not close in time to the others’ adoptions. Courts are not required to accept these opinions – and the courts that have considered them so far have not – but they preview the arguments the DOJ might someday advance in its own enforcement actions.
While the DOJ has yet to file any cases based on its theories, there are reports of an ongoing DOJ investigation. At the state level, enforcers in Arizona and the District of Columbia have also shown interest in pricing algorithms, filing two civil actions alleging collusion in the multifamily rental housing market.
Risk considerations
The recent antitrust scrutiny of algorithmic pricing creates potential risk for companies. In private actions, companies found to have violated the Sherman Act could be liable for treble damages. The use of the class action mechanism in US civil litigation can further increase exposure and create pressure to settle before courts and juries can definitively address the merits of plaintiffs’ claims. The recent private actions involving algorithmic pricing are still working their way through the courts, and so far plaintiffs have met with mixed success. While one case brought against casino hotels in Las Vegas was dismissed by the district court, others have already survived motions to dismiss. And the litigation cost and potential exposure from such actions have already prompted some defendants to reach settlements.
Traditionally, the DOJ has treated price fixing as a matter for criminal enforcement, routinely bringing felony charges against corporations and culpable individuals. Thus, given its views, there is also the potential risk that the DOJ may pursue its theories through criminal charges against companies and individuals employing algorithmic pricing. Criminal enforcement actions could bring consequences including corporate fines of up to $100m or twice the gain or loss from the offence and, for individual defendants, up to 10 years of imprisonment. Although it is far from clear that the DOJ would bring such an action – much less convince judges to accept its interpretation of the Sherman Act or a jury to convict for mere use of commercially available software – the government’s demonstrated interest in pursuing these theories nonetheless poses risks if investigations turn up compelling facts.
Strategies for mitigation
In the face of these recent developments, companies can take steps to mitigate the potential risk of litigation and investigation. Any assessment of risk starts with determining how the algorithm functions, including the source of the data it uses, both for training the algorithm and for generating pricing recommendations, and any limits on how data from different competitors is used. Several mitigation steps can then be considered, weighed against the practical necessities of the business operation. These strategies should not be mistaken for legal requirements, nor does their absence indicate wrongdoing, let alone amount to a violation of law. Rather, in an evolving landscape, they reflect preliminary insights on ways to reduce risk.
First, a company can mitigate risk if the pricing algorithm generates recommendations based only on public information (and the company’s own private data). In other words, to the extent that a pricing algorithm considers the prices or other information of competitors, the potential for antitrust litigation may be lower if this data is drawn exclusively from publicly available sources. Additionally, companies can consider steps to control the algorithm provider’s use of their own data. Notably, in dismissing the Las Vegas casino hotel litigation with prejudice, the court emphasised the lack of allegations of confidential data exchange.
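Purely as a hypothetical sketch of that first principle – not a legal safe harbour and not a description of any vendor’s software – the Python snippet below tags each input by source and screens out competitors’ non-public data before it reaches the recommendation logic. All names and figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    source: str       # where the figure came from, e.g. "own_records" or "public_listing"
    is_public: bool   # True if drawn from a publicly available source
    value: float

def permitted_inputs(points: list[DataPoint]) -> list[float]:
    """Keep the company's own data and public data; drop anything that
    looks like a competitor's non-public, commercially sensitive figure."""
    return [p.value for p in points if p.source == "own_records" or p.is_public]

inputs = [
    DataPoint("own_records", False, 2000.0),      # the company's own private data
    DataPoint("public_listing", True, 2100.0),    # publicly advertised market rate
    DataPoint("competitor_feed", False, 2250.0),  # a competitor's non-public figure: excluded
]
print(permitted_inputs(inputs))  # -> [2000.0, 2100.0]
```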
Another strategy for mitigating risk is for companies to treat algorithm-derived pricing recommendations as just one point of data to help inform their own independent pricing decisions. As was the case for Las Vegas casino hotels, companies can lower risks by avoiding any commitment or agreement to adopt the prices recommended by a pricing algorithm. Doing so undercuts potential arguments that companies have ‘jointly delegated’ their decision making to a common third party.
At the same time, companies can also mitigate risks by documenting their independent business decisions. By keeping a clear record of their decision making regarding the use of pricing algorithms and setting prices, companies can put themselves in a better position should they ever face suspicions of anticompetitive activity or concerted action involving their pricing.
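A similarly hypothetical Python sketch combines these last two points: the algorithm’s output is treated as advisory only, a person sets the final price independently, and the decision is logged together with its rationale. The file name, field names and figures are all invented for illustration.

```python
import datetime
import json

def record_pricing_decision(recommended: float, final_price: float, rationale: str,
                            decided_by: str, log_path: str = "pricing_decisions.jsonl") -> None:
    """Append a simple record showing that the final price was set independently,
    with the algorithm's recommendation treated as one input among others."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "algorithm_recommendation": recommended,
        "final_price": final_price,
        "decided_by": decided_by,
        "rationale": rationale,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the manager departs from the recommendation and documents why.
record_pricing_decision(
    recommended=2150.0,
    final_price=2050.0,
    rationale="Unit vacant for six weeks; prioritising occupancy over rate.",
    decided_by="regional pricing manager",
)
```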
Finally, companies should exercise care when communicating with their competitors about the use of algorithms to set prices. Depending on the nature of such communications, they could be used by private plaintiffs or government enforcers to argue that a horizontal agreement exists among an algorithm provider’s customers.
Ultimately, courts may determine that the use of pricing algorithms, on its own, does not run afoul of the antitrust laws at all. The increased access to information about market conditions that algorithms provide may facilitate more informed, competitive pricing rather than anticompetitive conspiracies. But given the flurry of activity from civil plaintiffs and posturing from government enforcers, companies should nevertheless take care to assess their use of algorithmic pricing technology and, where business considerations permit, take steps to reduce potential risks.
Boris Bershteyn and James Fredricks are partners and Thomas Smith is an associate at Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates. Mr Bershteyn can be contacted on +1 (212) 735 3834 or by email: boris.bershteyn@skadden.com. Mr Fredricks can be contacted on +1 (202) 371 7140 or by email: james.fredricks@skadden.com. Mr Smith can be contacted on +1 (212) 735 3829 or by email: thomas.smith@skadden.com.
© Financier Worldwide