EU AI Regulation: What has applied to providers and operators of AI systems since August 2025
The EU AI Act, the world’s first comprehensive regulation of artificial intelligence, has been in force since August 2024.
Following the first stage, which banned AI practices posing an unacceptable risk, a second set of obligations took effect on 2 August 2025, particularly for providers of large general-purpose AI models and for companies that develop or use such systems.
Kristin Pagnia, lawyer and internal data protection officer at atriga, explains what this means in practice in our interview.
Also read the first, already published part of our interview series on the EU AI Act here to find out what the financial industry needs to consider when using GenAI-based chatbots.
Ms Pagnia, what exactly happened on 2 August 2025?
On this date, the second implementation stage of the EU AI Act, also known as the AI Regulation, came into force. It primarily affects two areas: firstly, the so-called governance rules, i.e. standards for the safe and responsible use of AI; secondly, new obligations for providers of so-called general-purpose AI models, i.e. large foundation models such as those used in language systems. However, these regulations affect not only the developers of these models, but also anyone who incorporates such models into their own AI systems and builds on them, known as downstream providers.
Many companies already use AI, often without knowing whether they are considered providers. What does the legal text say about this?
According to Article 3 of the EU AI Act, a provider is a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark, regardless of whether this is done for remuneration or free of charge.
Important: Even those who integrate an existing AI model, such as a large language model (LLM) from OpenAI or Microsoft, into their own application become providers themselves under certain conditions. These companies are then subject to their own obligations, such as transparency or technical documentation. Whether you are affected must be examined on a case-by-case basis. Help with this can be found at the newly created AI Service Desk of the Federal Network Agency. With this advisory service, available at www.bundesnetzagentur.de/DE/Fachthemen/Digitales/KI/2_Risiko/kompass/start.html, companies can check whether and to what extent the European AI Regulation applies to them.
Kristin Pagnia, lawyer and internal data protection officer at atriga: “According to Article 3 of the EU AI Act, a provider is a natural or legal person who develops an AI system or a general-purpose AI model, or has one developed, and places it on the market or puts it into service under their own name or brand.”

And who is an operator within the meaning of the Regulation?
An operator (“deployer” in the English text of the Act) is someone who uses an AI system for business purposes on their own responsibility, for example in customer communication or for process automation. A provider can also be an operator if they use their own system. Many companies today take on precisely this dual role.
A term that is now being used more frequently is “general-purpose AI model”, or GPAI model for short. What exactly does this mean?
These are known as foundation models, i.e. basic models that can be used for a wide range of different tasks. A general-purpose model must meet three criteria: it must be widely applicable, be able to perform a variety of complex tasks competently, and be integrable into a wide range of applications. Large language models such as the OpenAI models behind ChatGPT, or the Microsoft Azure language model used by the atriga voice and chatbot “FYN”, generally meet these criteria.
What does this mean in concrete terms for companies that use or process such models?
Providers of these models, i.e. the original developers, must ensure, among other things, that their systems are transparently documented. This includes a public description of how the model was trained, what data was used and how copyrights were observed. In addition, technical documentation in accordance with Annex XI of the Regulation is mandatory. Companies that integrate these models into their own systems are dependent on this information being provided. For their part, they must ensure that all obligations under the AI Regulation are complied with.
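As a purely illustrative sketch, a downstream provider could keep a structured record of the information it needs from the model provider. The field names below are our own shorthand for the obligations mentioned above, not the official Annex XI schema, and all values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class UpstreamModelRecord:
    """Illustrative record of the information a downstream provider should
    obtain from a GPAI model provider (not the official Annex XI structure)."""
    model_name: str                # the base model integrated into the product
    model_provider: str            # original developer of the model
    training_data_summary: str     # public description of the training data
    copyright_measures: str        # how copyright was observed during training
    technical_docs_reference: str  # where the Annex XI technical documentation lives
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry; every value here is a placeholder.
record = UpstreamModelRecord(
    model_name="example-llm-1",
    model_provider="Example AI Ltd",
    training_data_summary="Public summary published by the provider",
    copyright_measures="Provider states text-and-data-mining opt-outs were honoured",
    technical_docs_reference="https://example.com/model-docs",
)
```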
Are there differences between models with and without systemic risk?
Yes, the legislator makes a clear distinction here. Models with systemic risk – i.e. those that are particularly powerful and have the potential to exert a major influence on society – are subject to stricter requirements. These include reporting obligations, risk assessments and special transparency measures. Whether a model is classified as systemic depends on several criteria, such as its reach, capabilities and potential influence on democratic processes or fundamental rights; under Article 51(2), a model is presumed to have high-impact capabilities when the cumulative compute used for its training exceeds 10^25 floating-point operations. Article 51 and Article 3(64) of the EU AI Act provide guidance on this.
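By way of illustration, the compute presumption in Article 51(2) can be expressed as a simple check. This is a minimal sketch only: the actual classification weighs further criteria and can also be established by Commission decision, and the function and names below are our own, not part of the Regulation.

```python
# Illustrative sketch of the Article 51(2) presumption: a general-purpose AI
# model is presumed to have high-impact capabilities (and thus systemic risk)
# when its cumulative training compute exceeds 10^25 floating-point operations.
# The full classification involves further criteria; this checks only the
# compute-based presumption.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, per Article 51(2)

def presumed_high_impact(training_flops: float) -> bool:
    """Return True if the compute-based presumption of Article 51(2) applies."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

if __name__ == "__main__":
    print(presumed_high_impact(3e25))  # True: presumption applies
    print(presumed_high_impact(1e24))  # False: no compute-based presumption
```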
A practical example: A company uses a chatbot system based on a large language model. What obligations does this entail?
First, it must be ensured that the user recognises that they are communicating with AI. This is a fundamental transparency requirement. The atriga chatbot, for example, introduces itself with the words “…I am FYN, the virtual assistant of atriga.” In addition, it must be documented on which model the system is based, how it was integrated and which protective mechanisms are in place. If there is a significant further development or a change of purpose, the company itself becomes the provider and must meet additional requirements, such as technical documentation or risk assessment.
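To make the transparency requirement concrete, here is a minimal sketch of how an AI disclosure could be wired into a chatbot’s first message. The function names are illustrative assumptions; only the FYN wording is taken from the interview, and this is not atriga’s actual implementation.

```python
# Minimal sketch: the user must be able to recognise that they are talking
# to an AI system, so the disclosure is sent unconditionally at session start.
# All names here are illustrative; this is not atriga's implementation.

AI_DISCLOSURE = "I am FYN, the virtual assistant of atriga."  # wording quoted in the interview

def greeting() -> str:
    """Build the first message of a session, leading with the AI disclosure."""
    return f"Hello! {AI_DISCLOSURE} How can I help you today?"

if __name__ == "__main__":
    # The disclosure does not depend on the user asking whether they are
    # talking to a machine; it is part of every opening message.
    print(greeting())
```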
In addition to technical obligations, there are also organisational requirements. What does this mean for employees?
AI literacy training has been mandatory since 2 February 2025. Companies must ensure that everyone who works with AI systems is sufficiently qualified. This applies to technical aspects as well as ethical, legal and security-related issues. AI expertise must not be a black box – this is a key message of the regulation.
What role does data protection play in all this?
A very big one. This is because AI systems often process personal data, either directly or indirectly. All the provisions of the General Data Protection Regulation (GDPR) still apply. The EU AI Act supplements this perspective with specific requirements for transparency, purpose limitation and fairness. Particularly when using large language models, care must be taken to ensure that no copyright-protected or sensitive content is processed or reproduced in an uncontrolled manner.
Sounds like a lot of effort. Where do you think companies should start?
The first step is always to take stock: Which systems are in use? Which models are used? What role does your company play: provider, operator or both? Then it is important to define clear responsibilities, set up documentation processes and adapt technical and organisational measures to the regulation.
It is important not to rush into action, but to proceed in a structured manner and with a sound understanding of the regulation. This will be an ongoing task for companies, particularly in conjunction with data protection, copyright and IT security.
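A minimal sketch of what the stocktake described above could look like as a structured inventory; the roles, field names and example values are illustrative assumptions, not prescribed by the Regulation.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    OPERATOR = "operator"
    BOTH = "both"  # the dual role many companies hold today

@dataclass
class AISystemEntry:
    """One row of an internal AI inventory (illustrative, not an official schema)."""
    system_name: str              # e.g. a customer-facing chatbot
    base_model: str               # which general-purpose AI model it builds on
    role: Role                    # provider, operator or both
    responsible_person: str       # clearly defined responsibility, as recommended above
    documentation_in_place: bool  # are documentation processes set up?

# Hypothetical example entry; every value is a placeholder.
inventory = [
    AISystemEntry("support-chatbot", "example-llm-1", Role.BOTH, "J. Doe", True),
]
```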
Dear Ms Pagnia, thank you very much for your insightful comments and valuable advice for our readers.
About the interviewee
Kristin Pagnia has been with atriga GmbH since the company was founded in 2003 and works there as a lawyer, Assistant Head of Legal Services and Data Protection Officer. Kristin Pagnia studied law at the Johann-Wolfgang-Goethe University in Frankfurt and has been working intensively in the field of data protection for many years. She is a member of the data protection working groups of the BDIU and the GDD. Ms Pagnia has also been a certified “Company Data Protection Officer (IHK)” since 2019.
The Federal Network Agency’s AI Service Desk supports companies, authorities and organisations in implementing the AI Regulation in Germany:
www.bundesnetzagentur.de/DE/Fachthemen/Digitales/KI/start_ki.html

