The Competition and Markets Authority (CMA), the legal profession and economists are increasingly relying on AI/ML techniques.
- Growing CMA capabilities: The CMA continues to invest substantially in its technology capabilities. Its Data, Technology and Analytics unit can gather, process and analyse very large volumes of structured and unstructured data using modern AI/ML techniques. The CMA uses these capabilities to gain insights, sometimes in real time, into online and traditional offline marketplaces. The CMA’s Online platforms and digital advertising market study and the work of its COVID-19 Taskforce reflect the agency’s growing AI/ML capabilities.
- Current uses in law and economics: Lawyers and economic consultants already use AI/ML techniques for document review and analysis, for gathering and analysing large volumes of data, and for text and social media analytics.
The regulatory framework governing the use of AI/ML techniques spans a range of legislation, including, but not limited to, the General Data Protection Regulation (GDPR).
- General Data Protection Regulation: The GDPR sets out obligations for companies that use AI/ML algorithms, in particular obligations related to ‘accountability’ and ‘interpretability.’
- Information Commissioner’s Office: The recent guidance from the ICO on interpretability also provides a useful framework for analysing compliance.
- Beyond GDPR: Further guidance about the use of AI/ML may come from rulings based on the Equality Act 2010, on human rights laws, and from regulators. For example, in 2019 the UK Financial Conduct Authority (FCA) published a research agenda that included themes focussed on technology, big data and AI. The FCA subsequently released a data strategy in 2020 that emphasised smarter ways to use data, as well as advanced analytics to transform the way it regulates.
What role might AI/ML techniques play in future litigation?
- From support to centre stage: AI/ML techniques already play significant roles in support of litigation, but relatively little AI/ML-based analysis has been the subject of expert economic testimony. This may change in the future.
- AI/ML techniques provide an additional tool: AI/ML techniques are sometimes criticised as mere ‘data mining’: good at prediction but not at establishing causality. However, in many economic contexts there is no inherent tension between ML and more traditional techniques (e.g., regression). In damages estimation, for example, both ML and regression can be framed as prediction problems: each is used to predict outcomes in the ‘but-for’ world so that they can be compared, on a like-for-like basis, with outcomes in the ‘actual’ world (see the sketch after this list). Used appropriately, ML techniques can therefore help to establish a causal relationship between conduct and outcomes.
- Benefits of applying ML techniques: Beyond the ability to handle big data in any economic quantification exercise (including damages estimation), AI/ML techniques have the potential to produce more robust results, and to verify the robustness of results obtained via more traditional methods. For example, AI/ML techniques can play a role in commonly undertaken pieces of economic analysis, such as demand estimation for merger analysis.
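As an illustration of the ‘two prediction problems’ framing above, the following Python sketch estimates ‘but-for’ prices with both a linear regression and a gradient-boosted ML model trained on a clean (pre-conduct) period, then compares the implied overcharges. It is a stylised sketch, not a complete damages methodology: the dataset, the column names (`price`, `cost`, `demand_index`, `conduct_period`) and the choice of models are illustrative assumptions.

```python
# Stylised sketch: damages estimation framed as two prediction problems.
# Assumes a hypothetical dataset with columns 'price', 'cost', 'demand_index'
# and a boolean flag 'conduct_period' marking the alleged infringement window.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

def estimate_overcharge(df: pd.DataFrame) -> pd.DataFrame:
    features = ["cost", "demand_index"]

    clean = df[~df["conduct_period"]]      # data used to learn the 'but-for' world
    affected = df[df["conduct_period"]]    # period allegedly affected by the conduct

    # Traditional approach: linear regression fitted on the clean period.
    ols = LinearRegression().fit(clean[features], clean["price"])
    butfor_ols = ols.predict(affected[features])

    # ML approach: a flexible model fitted on the same clean period,
    # used as a like-for-like robustness check on the regression.
    gbm = GradientBoostingRegressor(random_state=0).fit(clean[features], clean["price"])
    butfor_gbm = gbm.predict(affected[features])

    actual = affected["price"].to_numpy()
    return pd.DataFrame({
        "actual_price": actual,
        "overcharge_ols": actual - butfor_ols,
        "overcharge_gbm": actual - butfor_gbm,
    })
```

If the two sets of estimates diverge materially, the divergence itself is informative about how sensitive the but-for analysis is to functional-form assumptions.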
AI/ML techniques open the possibility of new theories of harm in competition cases.
- Potential for detrimental consequences of algorithms in practical settings: Tacit coordination through pricing algorithms is a theoretical possibility, but more rigorous empirical analysis is needed to establish how readily it arises in practice (see the simulation sketch after this list). Other potential theories of harm may include:
  - Abusive personalisation (e.g., price discrimination through behavioural targeting)
  - Manipulation of ranking and recommendation algorithms (e.g., the ACCC v. Trivago case on the use of algorithms to bias rankings)
  - Addictive design and exploitative technology
  - Insufficiently effective use of AI/ML to remove harmful content
- Potential remedies: There may be scope for both indirect and direct regulation. Indirect regulation may include standards and guidance for algorithmic risk assessment and impact evaluations. Direct regulation could include monitoring requirements.
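To make the tacit-coordination concern above more concrete, the minimal Python sketch below simulates two Q-learning agents that repeatedly set prices in a stylised duopoly, of the kind studied in the academic literature. Every element (the linear demand function, the price grid, the learning parameters) is an illustrative assumption and does not reflect any particular authority’s methodology.

```python
# Stylised sketch: two Q-learning pricing agents in a repeated duopoly.
# All parameters (demand, price grid, learning rates) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 11)      # discrete price grid; marginal cost = 1.0
n = len(prices)

def profit(p_own, p_rival):
    # Simple linear demand with substitution between the two firms.
    demand = max(0.0, 2.0 - 2.0 * p_own + p_rival)
    return (p_own - 1.0) * demand

alpha, gamma, eps, eps_decay = 0.1, 0.9, 1.0, 0.9999
episodes = 100_000

# Q[i] has shape (n, n): row = rival's last price index (state), column = own action.
Q = [np.zeros((n, n)), np.zeros((n, n))]
last = [int(rng.integers(n)), int(rng.integers(n))]   # each firm's last price index

for t in range(episodes):
    actions = []
    for i in (0, 1):
        s = last[1 - i]                              # condition on the rival's last price
        if rng.random() < eps:
            actions.append(int(rng.integers(n)))     # explore
        else:
            actions.append(int(np.argmax(Q[i][s])))  # exploit

    for i in (0, 1):
        s, a = last[1 - i], actions[i]
        s_next = actions[1 - i]
        r = profit(prices[a], prices[actions[1 - i]])
        Q[i][s, a] += alpha * (r + gamma * np.max(Q[i][s_next]) - Q[i][s, a])

    last = actions
    eps *= eps_decay

print("Long-run prices:", prices[last[0]], prices[last[1]])
```

Under these assumed parameters the one-shot Nash equilibrium price is about 1.33 and the joint-profit-maximising price is 1.5, so long-run prices near the top of that range would indicate supra-competitive outcomes. Whether such behaviour emerges in real markets, rather than in stylised simulations, is precisely the empirical question flagged above.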
Businesses can take several steps to prepare for AI/ML-related legal risks.
- Potential actions to ‘know your AI’: Maintain human oversight, practise sound record-keeping, maintain transparency, keep senior stakeholders engaged, and codify appropriate policies and procedures.
- Maintain accountability: Since overall accountability for data protection compliance lies with the organisation as data controller, senior management should not delegate compliance solely to the data science and engineering teams.