Following the updates to the AI Act, it's essential for banks to ensure their AI systems comply with the new regulation. TapiX is the industry-leading API enabling banks and fintechs to build solutions driven by enriched transaction data. Banks can use this data to feed machine-learning models and AI systems for credit scoring, customer behavior analysis, or custom features such as AI chatbots and intelligent contextual advisors in internal systems or customer applications. Since we are part of the AI supply chain, it made sense for us to put together these legislative insights for you. This guide breaks the process down into manageable steps, making it straightforward for digital banking experts to follow and ensure compliance.
The EU AI Act is a significant legislative initiative designed to regulate artificial intelligence systems within the European Union. Introduced by the European Commission, this act aims to ensure that AI technologies are developed and utilized in a manner consistent with the EU's values and regulatory standards. It is expected to take effect in May 2024, with a transition period of at least two years for full implementation.
In addition to setting safety standards for AI systems, the AI Act seeks to protect users from "bad data." A major concern is the potential for AI algorithms to spread misinformation, especially with the rise of AI-generated content. This includes addressing "hallucinations," where AI systems generate or perceive incorrect data. The AI Act aims to mitigate these risks through requirements for transparency and accountability. Does it apply to your bank? Let's find out.
TIP: How does the EU AI Act affect banking?
First, identify whether your system qualifies as AI under the AI Act; a minimal self-assessment sketch follows this checklist:
Check for Algorithm Use: Determine if your system processes inputs (like customer transaction data) to produce outputs (like credit scores, fraud alerts, or loan approvals) using algorithms.
Automation Check: Ensure the process is automated, meaning it's done by machines, not humans. Examples include automated loan approvals and fraud detection systems.
Objective Achievement: Verify the system is designed to achieve specific objectives, such as identifying fraudulent transactions or recommending investment products.
Data-Driven Adjustments: Confirm the system adapts based on data inputs, like learning from transaction patterns to improve fraud detection.
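To make the check concrete, here is a minimal self-assessment sketch. The dataclass fields and the all-criteria-must-hold rule are illustrative assumptions, not an official AI Act test.

```python
# Minimal self-assessment sketch for the four criteria above.
# Field names and the "all criteria must hold" rule are illustrative
# assumptions, not an official legal test.
from dataclasses import dataclass

@dataclass
class AISystemCheck:
    uses_algorithms: bool         # processes inputs to produce outputs
    is_automated: bool            # runs by machine, not by a human
    has_explicit_objective: bool  # e.g. flag fraud, score credit
    adapts_from_data: bool        # behavior changes with data inputs

    def qualifies_as_ai(self) -> bool:
        return all([
            self.uses_algorithms,
            self.is_automated,
            self.has_explicit_objective,
            self.adapts_from_data,
        ])

fraud_detector = AISystemCheck(True, True, True, True)
print(fraud_detector.qualifies_as_ai())  # True -> assess the Act further
```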
TIP: How are banks using AI in 2024?
Figure: AI opportunity map based on selected use cases along the value chain in banking and payments (not exhaustive).
Make sure your AI system fits the Act's definition of an AI system (Article 3(1)), which covers systems producing outputs that can influence decisions or environments, such as approving a loan, flagging a transaction, or providing investment advice.
Before diving into compliance actions, check if your AI system qualifies for any exceptions under the AI Act. Leveraging these exceptions can reduce regulatory burdens and streamline your compliance process. Here’s a quick overview of the key exceptions and how they might apply to your banking applications.
Determine if your AI system does not significantly influence decision outcomes. For instance, it might only provide risk assessments that are reviewed by a human officer before final decisions are made.
Assess whether your AI system is only intended to improve the result of a previously completed human activity, for example a tool that polishes a report an employee has already written.
Check if the AI system is still in the research, testing, or development phase and not yet launched for public use.
Verify if your AI components are available as open-source to the public, which might be the case if your bank contributes to or uses open-source AI tools for fraud detection.
Identify if your AI system is used exclusively for military or national security purposes. This is less likely for standard banking applications but could apply to certain cybersecurity measures.
Determine if the system is used for specific law enforcement purposes like biometric identification for regulatory compliance.
Assess if the AI system is used for public security or protecting critical infrastructure, such as ensuring the security of the financial system.
Check if the system is used for detecting financial fraud or managing systemic risks, as these are exempt from some high-risk requirements.
If your AI system falls under the AI Act without exceptions, it's essential to follow specific compliance actions. This step ensures that your bank's AI systems meet regulatory standards, avoiding penalties and maintaining customer trust. Here’s a concise guide to the necessary actions for compliance.
Identify the risk level of your AI system; a rough use-case mapping sketch follows this list:
Unacceptable Risk: Includes social-scoring systems, manipulative techniques, and biometric categorization, which are generally not used in banking.
High-Risk Systems: Such as those handling biometric data for identity verification, credit scoring, or fraud detection.
Systemically Important General-Purpose Systems: These require adherence to strict guidelines and are relevant for banks using AI in core systems that impact the entire financial market.
Low-Risk Systems: These come with general guidelines but still require careful consideration.
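As a triage aid, the sketch below maps a few common banking use cases to these tiers. The assignments follow the examples above and are illustrative assumptions, not legal advice.

```python
# Illustrative mapping of banking use cases to AI Act risk tiers.
# Tier assignments follow the examples above and are assumptions;
# classify real systems with legal counsel.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    GP_SYSTEMIC = "systemically important general-purpose"
    LOW = "low-risk"

USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "biometric_identity_verification": RiskTier.HIGH,
    "fraud_detection": RiskTier.HIGH,
    "internal_document_search": RiskTier.LOW,
}

print(USE_CASE_TIERS["credit_scoring"].value)  # "high-risk"
```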
Prepare for the conformity assessment and obtain CE marking to show compliance. Follow guidelines for high-risk systems, including documentation and registration.
Establish a robust risk-management and quality-management system, following frameworks such as ISO/IEC 42001:2023, to ensure the AI systems used for credit scoring and fraud detection are reliable and unbiased.
Regularly check for and address biases in your AI system’s outputs. For example, ensure credit scoring models do not unfairly disadvantage certain demographic groups. Ensure data quality and governance by adhering to standards like ISO 8000. Utilize open-source toolkits such as AI Fairness 360 to evaluate bias across a variety of applications.
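As a concrete starting point, the sketch below uses AI Fairness 360 to compute two common group-fairness metrics on a toy credit-decision log. The column names and group encodings are assumptions for illustration.

```python
# Sketch: measuring demographic bias in credit decisions with AI Fairness 360.
# Column names and group encodings are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision log: approved 1 = credit granted; age_group 1 = privileged group.
df = pd.DataFrame({
    "approved":  [1, 1, 0, 1, 0, 0, 1, 0],
    "age_group": [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["age_group"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

# Disparate impact well below 1.0 suggests the unprivileged group receives
# favourable outcomes less often (the "four-fifths rule" flags values < 0.8).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```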
Keep detailed records of how your AI system works and the decisions it makes. Ensure transparency by informing customers when they are interacting with an AI system, such as automated loan approval notifications.
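One lightweight way to implement such record-keeping is to write a structured, tamper-evident record for every automated decision. The schema below is an assumed example, not a prescribed format.

```python
# Sketch of a structured decision record for an audit trail.
# The schema and field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, outcome: str, automated: bool) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the log is verifiable without storing raw PII.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "outcome": outcome,
        "fully_automated": automated,  # supports the duty to notify customers
    }
    return json.dumps(record)

print(log_decision("credit-v2.3", {"income": 52000, "tenure": 4}, "approved", True))
```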
Implement mechanisms for human oversight to ensure AI systems are functioning correctly and addressing impacts over time. For example, have human officers review flagged transactions from fraud detection systems.
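A common oversight pattern routes low-confidence or high-impact outputs to a human queue instead of acting on them automatically. The 0.9 threshold and function names below are illustrative assumptions.

```python
# Sketch of a human-in-the-loop gate for fraud alerts.
# The auto-action threshold and in-memory queue are illustrative assumptions.
REVIEW_QUEUE: list[dict] = []

def handle_fraud_alert(transaction_id: str, fraud_score: float) -> str:
    if fraud_score >= 0.9:
        # High confidence: act automatically, but keep the decision auditable.
        return f"blocked:{transaction_id}"
    # Otherwise defer to a human officer for review.
    REVIEW_QUEUE.append({"tx": transaction_id, "score": fraud_score})
    return f"queued_for_review:{transaction_id}"

print(handle_fraud_alert("tx-1042", 0.71))  # routed to a human reviewer
```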
Ensure your AI system is accurate and robust, following standards like ISO/IEC TS 4213 for classification tasks. Regularly test the AI system to maintain high performance, particularly in areas like fraud detection and risk assessment. Use guidelines like ISO/IEC TR 24029-1 for assessing the robustness of neural networks.
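Regular testing can be as simple as tracking per-class metrics on a held-out set for every release. The sketch below uses scikit-learn; the synthetic, imbalanced data stands in for real transaction features.

```python
# Sketch: periodic accuracy check for a fraud classifier on a held-out set.
# Synthetic imbalanced data stands in for real transactions (an assumption).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
# For rare fraud, per-class precision and recall matter more than raw accuracy.
print(classification_report(y_te, model.predict(X_te), target_names=["legit", "fraud"]))
```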
Protect the AI system from cyber threats, including data poisoning and adversarial attacks. Follow cybersecurity standards like ISO/IEC 27001:2022 and implement measures to secure customer data and transaction information. Utilize open-source toolkits such as the Adversarial Robustness Toolbox to defend against AI-specific attacks.
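To see what such testing might look like, the sketch below probes a stand-in model with the Adversarial Robustness Toolbox's black-box ZOO evasion attack. The random forest and synthetic features are assumptions in place of a real fraud detector.

```python
# Sketch: probing a model's robustness with the Adversarial Robustness Toolbox.
# The random forest and synthetic features are stand-ins for a real detector.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import ZooAttack

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Wrap the model so ART can query it as a black box (score-based ZOO attack).
classifier = SklearnClassifier(model=model)
attack = ZooAttack(classifier=classifier, max_iter=20,
                   use_resize=False, use_importance=False, nb_parallel=1)

X_adv = attack.generate(X[:25])
print("Clean accuracy:      ", model.score(X[:25], y[:25]))
print("Adversarial accuracy:", model.score(X_adv, y[:25]))
```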
Appoint an authorized representative to handle communications with the AI Office and national regulators (Article 82).
Register high-risk AI systems in the newly created database (Article 131).
Maintain detailed technical documentation and records (Articles 71, 72, 132-134). Ensure transparency by informing users about AI interactions.
Implement human oversight mechanisms to monitor AI system operations and impacts (Articles 66, 73).
Collect protected attributes to evaluate bias, even within regulatory sandboxes (Articles 70, 138-141).
Look into simplified compliance options for small and medium banks. Utilize regulatory sandboxes to test and develop AI systems in a controlled environment without full compliance requirements initially.
Understand the potential fines for non-compliance with the AI Act's requirements (Article 99); a worked example follows the list:
- Engaging in prohibited practices: up to 35 million EUR or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliance with the requirements for the development or use of high-risk systems: up to 15 million EUR or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
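To make the "whichever is higher" rule concrete, here is a tiny worked example; the EUR 10 billion turnover figure is an assumption for illustration.

```python
# Illustrative fine caps under Article 99; the turnover figure is an assumption.
turnover = 10_000_000_000  # assumed EUR 10B worldwide annual turnover

prohibited_cap = max(35_000_000, 0.07 * turnover)  # EUR 700M
high_risk_cap = max(15_000_000, 0.03 * turnover)   # EUR 300M
print(prohibited_cap, high_risk_cap)
```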
Ensure compliance by the end of 2024 for prohibited uses and by mid-2026 for high-risk systems to avoid penalties. Start early to ensure all systems and processes are fully compliant.
Continuously monitor the performance of your AI system to ensure it meets the desired outcomes and adapts to any changes or new requirements.
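For ongoing monitoring, a simple check compares recent input distributions against a reference window and alerts on drift. The sketch below uses a two-sample Kolmogorov-Smirnov test; the 0.05 threshold is an illustrative assumption.

```python
# Sketch: per-feature data-drift check using a two-sample KS test.
# The 0.05 p-value threshold is an illustrative assumption.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference: np.ndarray, live: np.ndarray,
                     alpha: float = 0.05) -> list[int]:
    """Return indices of features whose live distribution differs from reference."""
    flagged = []
    for i in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, i], live[:, i])
        if p_value < alpha:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 3))
live = reference.copy()
live[:, 2] += 0.5  # simulate drift in one feature
print(drifted_features(reference, live))  # likely [2]
```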
Begin with robust models to effectively handle complex tasks. Once you have a clear understanding of the specific problems you're addressing, you can fine-tune and optimize the models for better efficiency and cost-effectiveness.
Ensure that the AI system does not have access to more data or permissions than a typical user. This prevents potential misuse and aligns the AI's capabilities with what is acceptable for users.
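In practice this means the AI component calls data services with the end user's own scopes rather than a privileged service account. The sketch below shows one way to enforce that; the permission names and store are hypothetical.

```python
# Sketch: scoping an AI assistant's data access to the requesting user's
# permissions. Permission names and the store are hypothetical.
USER_PERMISSIONS = {"alice": {"read:own_transactions"}}

def fetch_for_assistant(user: str, resource: str) -> str:
    # The assistant inherits the user's scopes, not a service account's.
    if resource not in USER_PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} may not access {resource}")
    return f"data for {resource}"

print(fetch_for_assistant("alice", "read:own_transactions"))
# fetch_for_assistant("alice", "read:all_customers")  # raises PermissionError
```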
For important and sensitive tasks, involve human oversight to review and validate AI decisions. This helps in mitigating risks associated with errors and ensures more accurate outcomes.
Recognize that AI systems are not perfect and will make mistakes. Implement mechanisms to identify and correct these errors, and continuously improve the system based on feedback and observed inaccuracies.
Figure: AI Act readiness at a glance, covering the compliance side and the technical side.
By following this comprehensive guide, digital banking experts can systematically assess their AI systems against the AI Act's requirements, stay legally compliant, and maintain the trust of their customers. This proactive approach will help banks avoid penalties, enhance their AI systems' performance, and build a reputation for responsibility and transparency in their use of AI.
About the author
Ondřej Slivka
Senior insider