EU AI Act sets out AI regulation in the EU – act now

Responsible AI: consultancy on the responsible use of AI in companies

Many companies have already introduced artificial intelligence into their day-to-day business or launched their own AI-based products. However, AI technology is not without its risks. In light of this, the European Parliament and the Council of the European Union have adopted the EU AI Act to regulate the use of AI in the EU. For companies, this represents both an obligation and an opportunity to develop and introduce AI systems in line with the principles of responsible AI.

What is the EU AI Act?

The AI legislation adopted by the EU is the world’s first comprehensive, mandatory regulation of AI. The EU AI Act requires providers, users and operators of artificial intelligence to uphold fundamental rights, and it prohibits the misuse of AI. It follows a risk-based approach: the more significant the risk a given use of AI presents to fundamental rights, the stricter the requirements imposed on it.

With our expert responsible AI consultancy services, we can help you foster trust in the use of AI in your company and ensure the legally compliant introduction and implementation of AI systems.

Book a consultation now

 

EU AI Act: an overview of the risk classes

The EU AI Act categorizes AI systems into different risk classes, each of which is subject to different requirements:

Unacceptable risk

This includes AI systems that are designed to manipulate people’s behavior in a specific way, exploit the weaknesses of certain groups of people, divide people into groups and assess them based on their social behavior (known as “social scoring”) or predict the risk of a person committing a crime.

These AI systems are prohibited in the EU – save for very limited exceptions.

High risk

This includes AI applications related to critical infrastructure, HR management, healthcare and banking – as well as autonomous driving.

Under the EU AI Act, these AI systems will have to meet comprehensive reporting, documentation, monitoring and quality requirements in order to be placed on the market in the EU.

Low or no risk

In the future, AI systems that present a low risk – such as chatbots – will be subject to certain transparency requirements. For example, AI-generated and AI-edited content such as text, images, audio and video will have to be clearly labeled as being artificially generated.

AI systems that do not present any risk – such as automatic spelling and grammar checkers – will not be subject to any statutory requirements.

General-purpose AI systems (GPAI systems), such as GPT-4, will be subject to certain documentation and transparency requirements, regardless of their risk class.
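
As an illustration, the risk tiers described above can be sketched as a simple lookup. Note that the tier names and the use-case mapping below are simplifications for illustration only, drawn from the examples in this section; a real classification requires a legal assessment.

```python
from enum import Enum

class RiskClass(Enum):
    """Simplified EU AI Act risk tiers (illustrative only)."""
    UNACCEPTABLE = "prohibited in the EU, save for narrow exceptions"
    HIGH = "extensive reporting, documentation, monitoring and quality duties"
    LIMITED = "transparency duties, e.g. labeling AI-generated content"
    MINIMAL = "no statutory requirements"

# Hypothetical mapping of use cases to tiers, based on the examples
# given in the text above -- not a legal classification.
EXAMPLE_USE_CASES = {
    "social scoring": RiskClass.UNACCEPTABLE,
    "crime risk prediction": RiskClass.UNACCEPTABLE,
    "hr candidate screening": RiskClass.HIGH,
    "autonomous driving": RiskClass.HIGH,
    "customer service chatbot": RiskClass.LIMITED,
    "spelling and grammar checker": RiskClass.MINIMAL,
}

def classify(use_case: str) -> RiskClass:
    """Look up the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case.lower()]
```

In practice, classification works the other way around: a system is assessed against the Act’s criteria rather than looked up by name, which is why a structured risk questionnaire is the usual starting point.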

 

When will the EU AI Act come into effect for German companies?

The EU AI Act entered into force on August 1, 2024, and will now be gradually implemented at national level.

The regulation provides for transition periods of six to 36 months, after which time companies must meet the relevant requirements. Failure to do so can result in significant fines and liability risks.

  • AI systems classified as prohibited by the EU AI Act must be removed from circulation no later than six months after the EU AI Act’s entry into force – i.e. by February 2, 2025. 
  • The requirements for general-purpose AI systems (GPAI systems) will apply from August 2, 2025. 
  • The requirements for high-risk commercial AI systems will apply from August 2, 2026.
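
For planning purposes, the staggered deadlines listed above can be kept in a simple schedule. The dates below mirror the ones stated in this section; the obligation labels are shorthand for illustration.

```python
from datetime import date

# Key EU AI Act deadlines mentioned above (entry into force: August 1, 2024).
AI_ACT_DEADLINES = {
    "prohibited systems withdrawn": date(2025, 2, 2),
    "gpai requirements apply": date(2025, 8, 2),
    "high-risk requirements apply": date(2026, 8, 2),
}

def obligations_in_force(on: date) -> list[str]:
    """Return the obligations whose deadline has passed on a given day."""
    return [name for name, due in AI_ACT_DEADLINES.items() if on >= due]
```
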

Which companies are affected by the EU AI Act?

All companies that provide, operate or distribute AI systems or products in the EU – even if they are established in a third country.

To what extent will these regulations affect your company? Contact us to arrange an assessment.

Book a consultation now

EU AI Act: a challenge and an opportunity

For many companies, the EU AI Act represents a major challenge, especially due to the short transition periods. However, this AI regulation also offers an opportunity to foster public trust in these new technologies through responsible use of artificial intelligence, in turn increasing acceptance of innovative AI solutions. Our team of AI experts will help you to develop and implement a customized AI strategy.

 

Our responsible AI consultancy

We offer six different consultancy packages related to responsible AI and the EU AI Act:

Would you like to find out more about the EU AI Act and the potential uses of AI? In our onboarding workshop, we’ll shed light on the most important aspects of the new AI legislation and provide an overview of the new regulations. We’ll also discuss specific use cases and applications with you to demonstrate how you can implement the new requirements most effectively in your company.

Scope: 

  • Introduction to the EU AI Act
  • Key regulations and requirements
  • Discussion of use cases and applications
  • Q&A and networking

It’s crucial that AI applications are developed and operated safely and in line with requirements. We’ll assist you with implementing a robust AI governance strategy. This strategy must correspond to the requirements of the EU AI Act, ISO 42001 and ISO 23894, along with other standards and regulations. It also needs to interface with your company’s existing management systems – especially regarding IT security and data protection.

Scope: 

  • Overview of current AI safety practices and relevant requirements of the EU AI Act
  • Creation of an inventory of your AI systems
  • Review of the risk assessment and the risk management process
  • Review of your company’s current measures against the requirements of the relevant standards
  • Production of a report, including all recommendations for action 
  • Optional: Support with implementation of improvement measures

Implementing an AI strategy that complies with the EU AI Act is crucial for companies seeking to become more competitive and develop innovative business models. We’ll help you to develop and implement a customized AI strategy.

Scope: 

  • Business models
    • Analysis of existing business models and identification of potential optimizations with AI
    • Development of new business models based on AI technologies
    • Production of an AI Business Model Canvas for strategic planning 
  • Use cases 
    • Identification and prioritization of use cases for AI
    • Creation of detailed use cases to assess economic value 
    • Assessment of use cases regarding their compliance with the EU AI Act, including AI risk assessment, ethical AI consultancy and review of AI regulations
  • Roadmap
    • Development of a roadmap for AI implementation 
    • Definition of milestones and timescales
    • Continuous monitoring and adjustment of the roadmap based on progress and new insights
    • Optional: Implementation of the roadmap by our experts
  • IT and organizational structure 
    • Analysis of the existing IT and organizational structure (“as-is” condition) and identification of required changes
    • Development of suitable IT and organizational structures to support the AI strategy (“to-be” condition)
    • Training and development for employees to foster acceptance and promote understanding of AI

The requirements of the EU AI Act don’t just concern companies and their processes: every single product must also be classified according to the specific risk it poses to society and to the people using it. This risk class determines the product’s protection requirements.

Scope: 

  • Overview of the risk classes for AI systems 
  • Support with assessing and classifying a use case
    • Provision of a risk questionnaire
    • Discussion of structure and approach
    • Assistance with the assessment
  • Company-specific recommendations for action
    • Recommendations for action for all classified AI systems

Working together with you, we’ll identify gaps in your AI projects and operational AI systems. We achieve this by conducting a technical audit of your AI systems. This involves looking for known vulnerabilities and misconfigurations, including those listed in the OWASP Top 10 for Large Language Models (LLMs) and/or Machine Learning (ML). We combine vulnerability scans, which automatically identify potential security flaws, with penetration tests so that we can conduct targeted simulations of real attacks and examine all aspects of your AI models’ security.

Scope: 

  • Technical auditing of applications, with reference to regulatory requirements and current vulnerabilities
    • Targeted vulnerability analyses uncover potential technical security flaws
    • Penetration testing supplements this with a simulated examination of an application’s vulnerabilities, making it possible to validate them from an attacker’s perspective.
  • Provision of a comprehensive audit report 
    • Analysis of identified vulnerabilities, risks and security flaws
    • Specific recommendations for action to resolve problems and improve security and compliance standards
    • We’ll propose both technical and strategic measures to ensure the long-term integrity and security of the examined systems.

Our consultancy services for generative AI are designed to meet the complex requirements of the EU AI Act regarding the management of generative AI systems. Drawing on our expertise, we’ll help your organization to successfully integrate generative AI while maintaining the highest standards of performance and safety.

Scope:

  • Development of robust workflows 
    We’ll sketch out customized workflows to meet your organization’s specific requirements and facilitate seamless integration of generative AI. 
  • Realization of GenAI use cases 
    We’ll help you to implement and integrate GenAI applications. 
  • Establishment of test processes 
    We’ll conduct comprehensive tests to ensure the reliability and efficiency of your generative AI solutions. 
  • Performance optimization 
    Relying on our proven methods and techniques, we’ll maximize the performance of your AI applications. 
  • Risk minimization 
    We’ll identify potential risks associated with the implementation of generative AI and advise you on ways to minimize them in pursuit of seamless operation.

Our expertise in AI implementation and consultancy

As an experienced IT service provider, we’ve developed and implemented a number of standardized AI solutions to date, including AI-as-a-Service (AIaaS) solutions and our smart assistant. We consider statutory requirements such as the EU AI Act and engage with future requirements at an early stage. We have already had a specific use case for a smart assistant positively assessed by TÜV SÜD and Calvin Risk – the AI Maintenance Assistant.

With our cross-sector expertise, we can offer you customized consultancy services, precisely tailored to your company’s needs. Our responsible AI consultancy comprises all aspects of responsible AI, from risk assessment to ethical implementation.

 

Responsible AI: consultation request

Would you like to learn more about responsible AI? Maybe you’d like a non-binding consultation? Contact us today so that we can shape the future of your AI applications.
