The EU AI Act

Compliance and transformation

The EU AI Act, which came into force on 1 August 2024, aims to foster the responsible development and use of artificial intelligence (AI). The legislation is intended to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly—and overseen by people rather than by automation. 

The question is—is the new regulation a challenge for compliance or a catalyst for innovation?

The Act establishes a model for the regulation of AI and may set a baseline for global regulations, much as the General Data Protection Regulation (GDPR) has done for personal data protection.

Though the regulation may be seen as a challenge for compliance, the AI Act aims to instil confidence in consumers through reliability, transparency and fairness—as well as being a catalyst for innovation in AI adoption.

Connect with us

Together we work collaboratively to unlock and preserve the value of AI.

Who is affected by the AI Act?

The AI Act applies to private organisations and public authorities and, according to Article 2 of the AI Act, affects five main groups of stakeholders: providers, deployers, importers and distributors, product manufacturers and affected persons. The AI Act expressly does not apply to private individuals if they use AI systems for purely private purposes. Additionally, there are further exemptions for systems used for the purpose of scientific research and development, the military, defence and national security. 

Organisations falling under the EU AI Act are expected to comply with its requirements by 2026. In some cases, however, such as the ban on prohibited practices and the obligation to ensure staff AI literacy, the deadline is significantly shorter, applying from February 2025. There are significant financial penalties for non-compliance: fines under the EU AI Act can reach €35 million or 7% of global annual turnover, whichever is higher. PwC’s approach, however, is not focused on mere compliance, but on using the Act as a tool to drive business transformation.

How can PwC help organisations use the EU AI Act to drive innovation?

PwC can help you develop an AI strategy that allows your organisation to align on what your future with AI should look like and how to achieve it. We can also help you nail down your priorities and capabilities and determine which AI use cases you might invest in.

PwC offers bespoke AI solutions, formulated by professionals from our Risk and Legal Teams working in tandem. We can help you not only remain compliant, but also use the AI Act to drive innovation. Our human-led, tech-powered approach can assist you in building AI responsibly and bringing big ideas to life across all stages of AI adoption. PwC can help you assess the legal, safety and ethical impacts of AI systems or vendors, identify risks and mitigation strategies, facilitate vendor selection and engagement, and shape contractual, technical and organisational safeguards.

Appropriate governance is essential for organisations to benefit from AI technology responsibly. PwC can help you build structured AI governance: we work in partnership with organisations to develop, procure and use AI systems in line with quality and legal standards, through clear processes, responsibilities and tools to manage the associated AI risks.

It is imperative that organisations have a clear understanding of the synergies, not just with the AI Act, but with other AI-related regulations and standards. PwC can help organisations understand the legislative and regulatory frameworks governing AI, such as the Digital Operational Resilience Act (DORA), GDPR and NIS2. We can also assist organisations in understanding the standards being developed around AI, which can signify products of distinguished quality, e.g. ISO/IEC 42001 and the NIST AI RMF.

PwC can help organisations perform AI risk assessments, including the identification and classification of AI systems, gap analysis and the drafting of compliance action plans. Maturity assessments also enable organisations to identify where they are on their journeys towards the responsible and safe use of AI, as well as AI Act compliance.

Bespoke solutions developed by PwC professionals can drive the smooth implementation of governance and compliance systems for the EU AI Act. Our team can help set up the necessary governance and compliance systems, and establish best practices, policies and controls throughout the AI lifecycle, from development to operations and monitoring. PwC can provide technical assessment and validation of algorithms and AI models, whether internally developed or vendor-supplied, evaluating their fitness for purpose, accuracy, robustness, relevance and explainability.

Upskilling employees is a key part of adopting a responsible approach to AI. Providing knowledge about AI, its risks and ways to mitigate them in practice is fundamentally important for moving from compliance to innovation in AI. PwC can help organisations upskill through workshops tailored to their specific needs.

PwC can assist clients in optimising and using AI efficiently, and can help devise sustainable approaches towards AI by providing practical examples of using AI to optimise energy usage and by exploring the broader sustainability benefits of AI.

AI Act compliance

Our interdisciplinary Risk and Legal Teams can help your organisation throughout the AI Act compliance process:

  1. Compliance Strategy: Assist in defining and developing AI Act compliance strategies.
  2. Assessment: Assess the current stage of compliance with the AI Act and other relevant regulations and standards, for instance: the Digital Operational Resilience Act (DORA), the Network and Information Systems Directive 2022 (NIS2), a directive for a common high level of cybersecurity across the Union, and the General Data Protection Regulation (GDPR).
  3. Implementation: Assist in the design of policies, procedures and practices required by the regulation as part of an AI Governance Framework, taking into account market standards such as the US National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF) or the International Organization for Standardization standard ISO/IEC 42001.
  4. Technical model assessment: At PwC, we provide a comprehensive framework for Algorithm and AI Validation Services, focusing on the technical assurance of AI systems. Our approach evaluates key factors such as sensitivity, specificity and accuracy while addressing contextual considerations and enabling continuous monitoring. This ensures compliance with the EU AI Act, reduces risks associated with data and mitigates potential biases in AI outcomes.
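The validation metrics named above (sensitivity, specificity and accuracy) can be illustrated with a minimal binary-classification sketch. This is an illustrative example only, not PwC's assessment tooling; the labels and predictions below are hypothetical.

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary (0/1) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def validation_metrics(y_true, y_pred):
    """Compute the three headline metrics from the confusion counts."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate (recall)
        "specificity": tn / (tn + fp),            # true negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Hypothetical model predictions against ground-truth labels:
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
metrics = validation_metrics(y_true, y_pred)  # all three equal 0.75 here
```

In a real validation exercise these figures would be compared against thresholds agreed in the model's intended-use documentation and monitored continuously after deployment.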

Our teams are able to help your organisation with the following areas: 

  • Algorithm selection and design: Assess the suitability and alignment of algorithms with business objectives, ensuring ethical and responsible AI design practices.

  • Data collection: Verifying the relevance, sufficiency and representativeness of collected data to support robust model development and reduce biases.

  • Data quality and preprocessing: Ensuring data accuracy, completeness and consistency through rigorous preprocessing techniques, while identifying and mitigating potential data issues.

  • Model training and tuning: Evaluating the effectiveness of training processes, hyperparameter optimisation and ensuring that models generalise well across diverse scenarios.

  • Model validation and performance metrics assessment: Testing model accuracy, robustness, sensitivity and specificity against real-world conditions, using industry-standard and bespoke performance metrics.

  • Monitoring and continuous improvement: Reviewing model deployment processes, scalability and integration into business workflows, with a focus on operational resilience and governance of Machine Learning Operations (MLOps).

  • Explainability and transparency: Assessing the interpretability of AI models to enhance trust and understanding among stakeholders, facilitating compliance with regulatory requirements.

  • Risk mitigation and bias detection: Identifying and mitigating risks, including algorithmic bias, to promote fairness and equity in AI outcomes.
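Bias detection of the kind described above often begins with simple group-level checks, such as comparing selection rates across demographic groups (a demographic-parity check). A minimal sketch, assuming binary accept/reject predictions; the group labels and data are hypothetical and this is one common metric, not PwC's methodology.

```python
def selection_rate(predictions):
    """Share of positive (e.g. 'accept') outcomes in a group's predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in selection rates between any two groups."""
    rates = {group: selection_rate(p) for group, p in preds_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap, rates

# Hypothetical model decisions for two groups (1 = accepted):
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],   # selection rate 0.75
    "group_b": [1, 0, 0, 0],   # selection rate 0.25
})
# A large gap (here 0.5) flags the model for closer fairness review.
```

A gap near zero does not prove fairness on its own; in practice several complementary metrics (e.g. equalised odds) are checked alongside it.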

We understand that the AI Act should be considered in relation with other relevant regulations and standards, and are therefore able to capture synergies between different regulations and address compliance in a comprehensive way. 


Responsible AI practice

Compliance with the AI Act is a key element of the responsible AI practice that should be implemented and followed by every organisation seeking to use AI responsibly and build trust in it. Our teams can help your organisation with the following:

  1. AI strategy: Define responsible AI strategies, covering elements related to compliance with regulations and sector standards.
  2. Management: Manage the implementation of end-to-end AI governance frameworks covering the entire AI life cycle.
  3. Assess and mitigate: Perform AI risk assessments and design risk-mitigating actions, which can include the identification and classification of AI systems and of the organisation’s role in the AI value chain in accordance with the AI Act taxonomy.
  4. Monitor and report: Define rules and practices related to the monitoring and reporting of AI within organisations with respect to regulatory requirements.
  5. People and culture: Conduct training and workshops to build organisational awareness and upskill people about AI opportunities, risk and appropriate practices related to the technology’s use as a business transformation driver.
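The identification and classification step above can be pictured as a simple system inventory keyed to the Act's risk-based tiers. This is a hypothetical sketch: the tier mapping and system names are illustrative assumptions, not legal advice, and real classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Tiers mirroring the AI Act's risk-based approach."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier   # assigned after legal/technical assessment

# Illustrative inventory entries (names and tiers are assumptions):
inventory = [
    AISystem("CV screener", "employment decisions", RiskTier.HIGH),
    AISystem("Support chatbot", "customer interaction", RiskTier.LIMITED),
    AISystem("Spam filter", "email filtering", RiskTier.MINIMAL),
]

# High-risk systems drive the bulk of the compliance workload:
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
```

Keeping such an inventory current is what makes the monitoring and reporting practices in point 4 workable at scale.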

How can your organisation use a responsible approach to build trust in AI and gain the full benefits of being compliant?

  • Leverage advanced innovation strategies to showcase your expertise, meet increasingly rigorous client expectations across markets, and connect with previously untapped customer segments.

  • Achieve precision before scaling by deploying customisable and reusable tools, and drive strategic innovation building on proven practices.

  • Secure the sustainability of your AI initiatives by aligning with upcoming regulations: prepare your teams with essential skills, establish robust procedures, and foster a forward-thinking culture to adapt seamlessly to future regulatory landscapes.

  • Stay compliant with local and EU-wide regulations, embed risk management throughout the AI lifecycle, and cultivate transparency and quality from data sourcing to model deployment, ensuring visibility and accountability.


Contact us

Gábor Balázs
Partner, PwC Hungary

Angel Bangachev
Director, Legal Business Solutions | Attorney-at-Law, PwC Bulgaria
Tel: +359 894 333 237

Aleksandra Bańkowska
PwC Legal Partner, PwC Poland

Mircea Bozga
Partner, SEE Risk Assurance Leader, PwC Romania

Bruno Curcija
Director, Risk Assurance Services, PwC Croatia

Tomáš Fiala
Document automation services, PwC Czech Republic
Tel: +420 724 326 846

Dzenet Garibovic
Senior Manager in Tax and Legal Services, PwC Croatia

Anja Gorenc
Director, Assurance, PwC Slovenia

Janno Hermanson
Partner, Engagement Leader, PwC Estonia
Tel: +372 5665 3509

Snezhana Ilieva
Director, Technology, Data & Analytics, PwC Bulgaria

Antal Kerekes
Partner, PwC Hungary

Petr Kincl
Managing Partner, PwC Legal
Tel: +420 734 182 727

Marcin Makusak
Partner, PwC Poland
Tel: +48 502 184 718

Patrik Meliš-Čuga
AI Strategy and Risk Leader, PwC Czech Republic
Tel: +420 734 692 573

Michal Pališin
Senior Attorney - Director, PwC Slovakia
Tel: +421 902 953 313

Anda Rojanschi
Partner, CEE Risk & Regulatory Leader, PwC Romania

Sanja Savič M.Sc.
Director, Legal, PwC Slovenia

Piotr Urban
PwC CEE Broader Assurance Services Leader, PwC Poland
Tel: +48 502 184 157

Zoltán Várszegi
Law Firm Member, PwC Hungary

Sarmite Zakovska
Senior Associate, PwC Legal

Štefan Čupil
Partner, Risk Assurance Leader, PwC Slovakia
Tel: +421 911 964 212

Follow us on social media