
14 Jun 2024

ESMA issues guidance on the use of AI





On 30 May 2024, the European Securities and Markets Authority (ESMA) issued a public statement providing initial guidance to firms using artificial intelligence (AI) technologies when providing investment services to retail clients.

ESMA has published the guidance in light of firms' key obligations under MiFID II and to emphasise that clients' best interests must always be prioritised.

We set out below some of the key aspects of the guidance.

Uses and potential risks for firms and clients

ESMA notes the opportunities AI presents for efficiency, innovation and improved decision-making, which have the potential to transform the landscape of retail investment services.

Uses currently being explored or deployed by firms include customer support, compliance, fraud detection, risk management, and support in the provision of investment advice and portfolio management, e.g. analysing a client's knowledge and experience, financial situation (including risk tolerance), and investment objectives in order to provide personalised investment recommendations or to manage and rebalance client portfolios.

The guidance also highlights the following risks:

  • Robustness/reliability of output, quality of training data, and algorithmic bias;

  • Lack of transparency and explainability/interpretability, including opaque decision-making and a tendency to neglect the importance of human judgment;

  • A lack of accountability and oversight arising from over-reliance on AI by both firms and clients for decision-making;

  • Privacy and security concerns linked to the collection, storage, and processing of the large amount of data needed by AI systems.

ESMA expects firms to have in place appropriate measures to control the use of AI systems in any form by employees, including third-party AI technologies, whether the AI system has been formally adopted by the firm or is used by employees without the knowledge and approval of senior management.

Client best interest and information requirements

Investment firms should be transparent on the role of AI in investment decision-making processes related to the provision of investment services. ESMA expects that when firms provide clients with information on how they use AI tools for the provision of investment services, they ensure that such information is presented in a clear and fair manner.

Similarly, investment firms using AI for client interactions, such as chatbots or other types of AI-related automated systems, should transparently disclose to clients the use of such technology during these interactions.

Organisational requirements

ESMA expects a firm’s management body to have an appropriate understanding of how AI technologies are applied and used within the firm and to ensure appropriate oversight of these technologies. The implications of AI deployment should be regularly assessed, and appropriate adjustments made in response to evolving market conditions and the regulatory landscape.

Firms’ risk management frameworks should be tailored to address AI implementation and firms should conduct regular AI model testing and monitoring to identify and mitigate potential risks and biases. When using AI tools in investment decision-making processes, the creation, training, testing, validation, and continuous analysis of data are integral processes that require rigorous oversight to maintain the integrity and performance of the tools.

In cases where third-party AI tools are utilised, firms are reminded of the applicable MiFID II requirements regarding the outsourcing of critical and important operational functions aimed at ensuring an adequate level of due diligence in the selection process of service providers and the implementation of adequate controls.

Robust controls should be established to ensure the accuracy of information supplied to and/or utilised by AI systems in order to prevent the dissemination to clients of erroneous information or the provision of misleading investment advice. In addition, investment firms should implement sufficiently frequent ex-post controls to monitor and evaluate any process that involves the delivery of information directly or indirectly through AI tools. These post-interaction assessments are considered crucial in ensuring ongoing compliance with MiFID II obligations and safeguarding clients against the dissemination of any inaccurate or misleading information.

Firms should ensure adequate training programmes on the topic of AI for relevant staff, ensuring they are equipped to manage, interpret, and work with AI technologies. Training should cover not only the operational aspects of AI but also its potential risks, ethical considerations, and regulatory implications. Relevant staff should be equipped with the knowledge to identify and address issues such as data integrity, algorithmic bias, and unintended consequences of AI decision-making.

Conduct of business requirements

The use of AI technologies in the provision of investment advice and portfolio management services requires a heightened level of diligence, particularly in ensuring the suitability of the services and financial instruments provided to each client.

Firms should have robust controls to ensure that the AI systems used are designed and monitored in the context of the firm’s product governance and suitability requirements, and rigorous quality assurance processes should be implemented. These processes should include thorough testing of algorithms and their outcomes for accuracy, fairness, and reliability in various market scenarios, as well as periodic stress tests to evaluate how the AI system performs under extreme market conditions.

In addition, firms should ensure strict adherence to data protection regulations to safeguard any sensitive client information collected for the purpose of the provision of investment services.

Record keeping

In line with MiFID II requirements, ESMA expects investment firms to maintain comprehensive records on AI utilisation and on any related clients’ and potential clients’ complaints. These records should include all aspects of AI deployment, including the decision-making processes, data sources used, algorithms implemented, and any modifications made over time.

Next steps

Looking forward, ESMA and the National Competent Authorities (NCAs) will continue monitoring the use of AI in investment services and the relevant EU legal framework to determine if further action is needed in this area.

ESMA encourages firms to engage with their supervisory authorities to navigate AI-related challenges.

A copy of the ESMA Statement is available here.

If you have any queries regarding the content of this article, please contact the authors or your usual Dillon Eustace contact.

DISCLAIMER: This document is for information purposes only and does not purport to represent legal advice. If you have any queries or would like further information relating to any of the above matters, please refer to the contacts above or your usual contact in Dillon Eustace.


Copyright Notice: © 2024 Dillon Eustace LLP. All rights reserved.
