EU AI Act meets GenAI bots: what the financial sector needs to consider now
On 21 May 2024, the EU member states adopted the EU AI Act – the first comprehensive regulation worldwide to lay down rules for the use of artificial intelligence.
The legal framework is intended to provide security for consumers and companies in Europe by creating clear standards for the use of AI technologies. The regulation addresses potential risks such as data breaches, discrimination and security vulnerabilities. To ensure adequate protection, AI systems are categorised into different risk classes, with stricter requirements applying to technologies with a higher risk potential.
The first bans on certain AI applications will now take effect from February 2025, while most of the regulations will not apply until August 2026.
We spoke to atriga Data Protection Officer Kristin Pagnia about the EU AI Act and what it means for the use of AI systems in the financial sector.
Mrs Pagnia, what is behind the EU AI Act and what are the specific aims of the regulation?
The EU AI Act is the first comprehensive regulation to create clear rules and a framework for the use of artificial intelligence (AI) in Europe. The EU aims to promote innovation while at the same time addressing the risks of AI and ensuring that people’s fundamental rights are protected. Companies are enabled to use pioneering technologies, but at the same time the scope for crossing ethical lines with AI is restricted. This matters because AI could make many things possible that are ethically unacceptable – in companies, for example, applicant selection procedures in which AI could be used excessively. Here the AI Act sets clear limits. To guarantee the protection of fundamental rights, it is important to anchor this idea of protection at an early stage of AI development and to set ethical limits on the economic interests of companies.
When do you expect the first landmark judgements and what could they mean for practice?
The AI Act has been in force since August 2024, but most of the provisions of the AI Regulation will only apply from 1 August 2026, so it will be years before the first legally binding judgements on the AI Regulation are handed down. In this respect, companies will have to live with some uncertainty for a while yet. Nevertheless, it makes sense to get to grips with the new rules now and tackle their implementation.
Kristin Pagnia, lawyer and internal data protection officer at atriga: “Companies that use AI or are planning to use it should already be familiarising themselves with the provisions of the AI Regulation and starting to implement them.”

What impact do the risk classes have on the use and further development of AI? And what exactly does it mean when an application is categorised as “high risk”?
The AI Act divides AI systems into risk classes following a risk-based approach: the higher the risk, the stricter the obligations. It distinguishes four risk levels – unacceptable risk (prohibited practices), high risk, limited risk and minimal risk – as well as a separate group for general-purpose AI models. According to Article 49 of the AI Act, high-risk AI systems must be registered in the EU database for high-risk AI systems. The requirements for ‘high-risk’ AI systems are significantly stricter than for lower-risk systems. Providers and operators of AI systems should therefore carefully weigh up the level of risk they wish to work with when using and further developing AI.
What do companies need to consider regarding the transparency of GenAI chatbots or voicebots? What steps should companies take from a data protection perspective to ensure that GenAI bots meet these requirements?
The regulation requires special transparency for AI applications that interact with humans. The AI Regulation classifies such systems – chatbots among them – as AI systems with limited risk. For their legally compliant use, the EU requires that AI-based chatbots operate transparently: users must be informed that they are chatting or talking to an AI. Beyond this disclosure, users should also be able to understand the decisions made by the AI, so that they can decide for or against the further use of the AI at any time.
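In practice, the disclosure obligation described above can be met by showing a notice before the bot interaction begins and by letting the user hand over to a human at any point. The following is a minimal sketch with hypothetical names (the function names, the notice wording and the opt-out keyword are illustrative assumptions, not a prescribed implementation):

```python
# Minimal sketch: AI disclosure and opt-out for a GenAI chatbot session.
# All names and the notice text are hypothetical; the AI Act's transparency
# rules require informing users they are interacting with an AI, not this
# specific wording or API.

AI_DISCLOSURE = (
    "Please note: you are chatting with an AI-based assistant. "
    "You can request a human agent at any time."
)


def start_chat_session() -> list[str]:
    """Open a session; the very first message is always the AI disclosure."""
    transcript = [AI_DISCLOSURE]
    return transcript


def wants_human_agent(message: str) -> bool:
    """Detect a simple opt-out so the user can leave the AI interaction."""
    return "human" in message.lower()
```

The key design point is that the disclosure is emitted unconditionally at session start rather than on request, so no user can interact with the bot without having seen it.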
The regulation threatens high penalties for non-compliance. How are companies preparing for this?
The EU is working with a three-tier sanctions concept in which the possible sanctions even exceed the possible fines under the General Data Protection Regulation (GDPR). And 2026 will arrive faster than one might think. Companies that use AI or are planning to use it should therefore already be engaging with the provisions of the AI Regulation and start implementing them now. This also includes a well-developed compliance concept.
What are the specific challenges for GenAI bots in the financial sector?
In the financial sector, dialogues with bots often contain particularly sensitive data. Correspondingly strong security measures must therefore be taken to protect this data, and – as always when implementing data protection – care must be taken to comply with the data protection principles. Employees involved in the operation and use of AI systems must also be well trained in AI literacy. GenAI is seen as offering great opportunities for change in the financial sector. It remains important to handle AI responsibly and to earn the trust of users.
About the interviewee
Kristin Pagnia has been with atriga GmbH since the company was founded in 2003 and works there as a lawyer, Assistant Head of Legal Services and Data Protection Officer. Kristin Pagnia studied law at the Johann-Wolfgang-Goethe University in Frankfurt and has been working intensively in the field of data protection for many years. She is a member of the data protection working groups of the BDIU and the GDD. Ms. Pagnia has also been a “Company Data Protection Officer IHK” since 2019.