Putting AI on a leash: What does the EU AI Act mean for companies

The EU AI Act has been adopted and will gradually come into force over the next two years. Many companies are already testing generative AI, using image generation tools or chatbots and other AI applications. New laws now apply to companies, public authorities and private individuals. Is the AI software used so far still legal? What penalties can companies expect? What we know so far.
The EU AI Act has been passed - which AI applications in companies are still legal?

Overview: The EU AI Act and many misunderstandings

On May 21, 2024, the Council of the European Union adopted the AI Regulation (AI Act). With its publication in the EU Official Journal, the deadlines begin to run. However, there are many unanswered questions regarding the practical implementation of the legislative package that will regulate AI applications in the European Union.

For example: Many companies use ChatGPT or other LLMs in their daily work – what do they need to consider now? What can they demand from model providers such as OpenAI? And what is the legal situation if these tools cause problems or deliver incorrect results?

Let’s clear up the big misunderstandings right away:

Misconception 1: Every area of AI will be regulated

The EU AI Act will only regulate part of artificial intelligence; a large proportion of applications will not be covered by the new law. Why? The AI Act does not look at the technology, but at the area of application. Specifically, it asks: In which area of application is there a risk for us as a society?

Let’s take the example of an AI application that drives a simple recommendation algorithm like Netflix’s. The worst-case scenario is that the algorithm suggests “Pretty Woman” instead of “Star Wars” 🙂 That is not a risk to society, so it is not something the AI Act needs to regulate.

Misconception 2: The EU AI Act is the first law to regulate AI

This is not true, because we already have laws that implicitly regulate some areas. If I plan something illegal, such as building a bomb, and have an LLM explain to me how to do it, it’s just as illegal as if I googled it beforehand.

Misconception 3: LLMs such as ChatGPT and general purpose AIs also fall under the same rules

LLMs, i.e. Large Language Models, and other general-purpose AIs (GPAI) – including the best-known LLM, ChatGPT – are indeed regulated, but they are subject to their own set of rules rather than the same rules as other AI applications. This does not fit the logic of “the application is regulated, not the product”; the legislator has deliberately created an exception here.

The AI Act sets out specific requirements for these GPAI models, i.e. the general-purpose AI models:

  • Requirements for the quality of the training data, for example to avoid producing bias, illegal content and hate speech.
  • Providers are obliged to check their products to ensure that no discriminatory/illegal content is output – and must adapt their models if this is the case.
  • There must therefore also be reporting options for users who wish to report such content.
  • Mandatory labeling: In the case of chatbots, there will be an obligation to label that you are talking to an AI chatbot and not a human.
  • There will also be an extra category of particularly powerful models that must meet additional, stricter requirements – for example, “sustainability in design” or cybersecurity requirements. The sticking point here is defining when an AI system counts as a particularly powerful model.

New instances: Who is responsible?

Legal enforcement for the GPAI models, i.e. general-purpose AI models, is the responsibility of the EU AI Office (also known as the European AI Office), which has yet to be established. National authorities will be responsible for everything else; in Germany, for example, it is not yet clear which body will take this on.
Since AI software falls under product safety law, the European and national standardization bodies are obvious candidates – in Germany, for example, the TĂśV, the BSI or the state data protection authorities.

The AI Office sits within the European Commission as a center of AI expertise. It will be central to enforcement and to supporting the member states and companies. As a sparring partner and supervisory body for all stakeholders, it will coordinate the member states and their national authorities and handle enforcement for the general-purpose AI models.

The EU AI Act also defines a chain of responsibility: if a company builds its product on another company’s AI product, it has the right to obtain from that provider the information it needs to be legally compliant.

Penalties and law enforcement

The EU AI Act provides for strict, turnover-based penalties for non-compliance, graded by the severity and nature of the breach:

  • Companies that engage in prohibited AI practices can be fined up to €35 million or 7% of their annual global turnover, whichever is higher.
  • Most other violations can result in fines of up to €15 million or 3% of annual global turnover.
  • Supplying incorrect or misleading information to authorities can be punished with fines of up to €7.5 million or 1% of annual global turnover.
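The “whichever is higher” rule in these tiers can be sketched as a one-line calculation. A minimal illustration in Python – the function name and the figures in the example are made up for demonstration and are not taken from the Act:

```python
def applicable_cap(flat_cap_eur: float, turnover_pct: float,
                   annual_turnover_eur: float) -> float:
    """Return the upper fine limit for one penalty tier:
    the flat cap or the turnover-based cap, whichever is higher."""
    return max(flat_cap_eur, turnover_pct * annual_turnover_eur)

# Illustrative figures: a flat cap of €20 million vs. 4% of
# €1 billion annual global turnover.
cap = applicable_cap(20_000_000, 0.04, 1_000_000_000)
print(cap)  # 40000000.0 – the turnover-based cap wins here
```

For large companies the percentage cap dominates; for smaller companies the flat cap is the binding limit (e.g. with €100 million turnover, 4% is only €4 million, so the €20 million flat cap applies).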

In addition, the law allows for corrective measures, including mandatory adjustments or discontinuation of the offending AI systems. These penalties are intended to enforce accountability and compliance with ethical standards in the development and use of AI.

Conclusion on the EU AI Act

The EU AI Act aims to reconcile the regulation of dangerous AI and the promotion of innovation. It categorizes AI systems according to risk levels, with strict rules for high-risk applications such as biometric surveillance.
Data quality, transparency, accountability and human oversight are important requirements. To avoid overregulation, the law provides flexibility for research and companies. It promotes cooperation between EU Member States for a harmonized approach. The aim is to protect citizens while promoting technological progress and economic growth.

However, the EU must pick up the pace, as many issues remain unresolved. It must quickly establish the implementing framework and bodies such as the AI Office and the independent panel of experts.


Do you have questions about AI?

Which are the best tools for your use case? Which tools are legal? What are the most important things to consider? We'll tell you!

Ask our AI Experts!

Updates straight to your inbox

Would you like to receive regular updates on the topic of "AI in marketing" and the EU AI Act? Sign up for our newsletter directly below.