By embracing the potential of AI whilst diligently addressing ethical and governance considerations, organisations can harness its transformative power to enhance their Governance, Risk, and Compliance (GRC) capabilities, ultimately driving greater organisational resilience and success.
In this article we look at the following:
- Potential AI Use Cases in Governance, Risk, and Compliance
- The Fundamentals of AI Governance
- Ethical Considerations in AI Governance
- Accountability for the Oversight of AI
Potential AI Use Cases in Governance, Risk, and Compliance
AI offers a wealth of opportunities to enhance GRC functions within organisations. Some key use cases include:
Risk Identification and Assessment: AI-powered systems can analyse vast amounts of data to identify emerging risks and trends, providing real-time risk intelligence to inform strategic decision-making.
Compliance Monitoring and Reporting: AI can automate the monitoring of regulatory changes, policies, and procedures, flagging potential breaches and generating compliance reports with greater speed and accuracy.
Fraud Detection and Prevention: AI models can detect anomalies and patterns indicative of fraudulent activity, enabling organisations to proactively mitigate financial and reputational risks (see the sketch after this list).
Audit Automation: AI can be leveraged to automate certain audit tasks, such as sampling, testing, and documentation, freeing up human auditors to focus on more complex, value-added activities.
Predictive Analytics: By harnessing AI’s predictive capabilities, organisations can forecast potential scenarios, stress-test strategies, and make more informed, data-driven decisions.
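To make the fraud-detection use case above concrete, here is a minimal sketch of unsupervised anomaly detection over transaction data, assuming a pandas DataFrame and scikit-learn's IsolationForest. The feature columns and the 1% contamination rate are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: flagging anomalous transactions for human review.
# Column names ("amount", "hour", "merchant_risk_score") are illustrative.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_suspicious_transactions(transactions: pd.DataFrame) -> pd.DataFrame:
    """Return the transactions the model scores as anomalous."""
    features = transactions[["amount", "hour", "merchant_risk_score"]]
    model = IsolationForest(contamination=0.01, random_state=42)
    scored = transactions.copy()
    scored["anomaly"] = model.fit_predict(features)  # -1 marks outliers
    # Route outliers to a human investigator rather than blocking
    # automatically, in line with the human-oversight theme below.
    return scored[scored["anomaly"] == -1]
```

Flagged transactions should feed a human review queue rather than trigger automated action, reflecting the oversight principles discussed later in this article.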
The Fundamentals of AI Governance
There are many frameworks and tools available to support organisations in their AI governance journey. These include the ISO/IEC 42001 standard for AI management systems,[1] the NIST AI Risk Management Framework[2] and the OECD Responsible AI Governance Framework for Boards.[3]
Effective AI governance requires a holistic approach, spanning your organisation’s use of AI, the data involved, the business units operating AI systems, and the management chain of responsibility up to, ultimately, the board. A comprehensive AI governance framework should address the following key pillars:
AI Strategic Alignment: Any decision to use AI systems should align with your organisation’s strategy and objectives, to ensure it is used to drive business performance and add value. Ensure your senior management and strategy teams have addressed the areas of your organisation that would benefit from AI, and identify the most effective systems to achieve your objectives. Support from your IT, legal, privacy, commercial and compliance teams should ideally be obtained at this stage of the project, to avoid duplication and ensure systems are embedded into your organisation’s technology and data governance frameworks.
AI Risk Management: Identify, assess, and mitigate the potential risks associated with AI deployment, including algorithmic bias, data privacy, and compliance issues. Understanding the supply chain for AI, including the provider of the underlying model, the software solutions provider incorporating it into a product, and your own use of AI will help you to identify risks in using the system.
These are likely to include ESG risks (energy and precious mineral use in larger AI models), labour risks (for example, microtasked workers may be involved in producing training data sets for AI models), biases in underlying data sets, hallucinations or other malfunctions in outputs, and legal risks underpinning the original provider’s acquisition and use of data (including privacy and IP rights).
When procuring an AI system, try to understand the source of the underlying model, including any publicly available safety briefings the provider may publish. When deploying AI systems, understand where any data inputted into the system will go; will it be made available to your software provider or the provider of the underlying model?
Try to identify whether commercially sensitive information will be shared with providers, and whether your organisation can make IP claims over any outputs from an AI system. Ensure that any personal data used in an AI system is processed under your existing legal bases and privacy policies. Be aware of any biases in the system outputs, and try to create a process for human oversight of decisions.
Remember that your organisation will ultimately be responsible for the use of any outputs from the AI system. Finally, make the board aware of your use of AI, perhaps through risk registers (see the sketch below). In some jurisdictions, such as the US and Australia, directors have faced personal liability for algorithmic bias in areas such as insurance and banking.
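As one way of surfacing AI risks to the board, the sketch below models a risk-register entry as a simple data structure. All field names, scoring scales and the escalation threshold are hypothetical; adapt them to your organisation’s existing risk methodology.

```python
# Illustrative AI risk-register entry for board reporting.
# Field names, scales and the example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    system: str                 # e.g. "loan pre-screening model"
    risk: str                   # description of the risk
    category: str               # ESG / labour / bias / hallucination / legal
    likelihood: int             # 1 (rare) to 5 (almost certain)
    impact: int                 # 1 (minor) to 5 (severe)
    owner: str                  # accountable manager
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

entry = AIRiskEntry(
    system="loan pre-screening model",
    risk="algorithmic bias against protected groups",
    category="bias",
    likelihood=3,
    impact=5,
    owner="Head of Credit Risk",
    mitigations=["quarterly fairness testing", "human review of declines"],
)
# Escalate to the board above an agreed threshold, e.g. a score of 12
assert entry.score == 15
```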
[1] ISO/IEC 42001, Artificial Intelligence Management System, available online at: https://www.iso.org/standard/81230.html (last accessed June 2024).
[2] NIST AI Risk Management Framework, available online at: https://www.nist.gov/itl/ai-risk-management-framework (last accessed June 2024).
[3] OECD Responsible AI Governance Framework for Boards, available online at: https://oecd.ai/en/catalogue/tools/responsible-ai-governance-framework-for-boards (last accessed June 2024).
AI Ethics and Responsible Use: Develop and implement ethical principles and guidelines to ensure AI is used in a responsible, transparent, and accountable manner. AI ethics is not just about tackling bias, although testing for biases in underlying data sets and outputs is important. AI ethics also includes the rights of individuals whose data is used to train a system, or is inputted into an AI system, and who may be affected by the decisions made by an AI system.
Ensure that your legal, compliance and ethics teams have considered the implications of your use of data in AI systems, how the outputs from AI will impact individuals, and how your organisation can ensure human oversight of any AI-powered processes. Transparency notices for individuals, including where an AI chatbot is used for customer service, should be prepared by your compliance or legal team, and your privacy notice(s) may need to be updated. Check with your legal team for any specific legal requirements, for example under the GDPR or EU AI Act.
Finally, the use of AI in the workplace may affect employees. Where employees’ data is being processed by an AI system, ensure that HR are involved in decisions to use AI and that they understand how the system will operate, including the requirement for human oversight and the possibility of bias. If employees’ data is to be used with an AI system, your employee privacy notice should be updated and employees should be consulted.
Remember that the use of AI in an employment context is likely to be classified as “high risk” under the EU AI Act, which will involve additional conformity checks and assessments. Beyond the processing of employee data, employees may feel their jobs are at risk from AI, so consult your workforce on your organisation’s use of AI where possible, so that it is embraced by your team as a way to enhance productivity.
Explain how AI systems will operate in practice, and consult with trade unions where possible. Recent consultations with the UK’s Trades Union Congress (TUC) have produced helpful guidelines in this area for businesses.[4]
AI Lifecycle Management: Establish robust processes for the design, development, deployment, and ongoing monitoring of AI systems throughout their lifecycle. One key element of an AI system is its capacity to ‘learn’ and develop.
Ensuring that your technical team are aware of developments in the system, its performance and any potential biases or hallucinations in outputs, will be key to maintaining effective AI.
Technical monitoring should include keeping your legal and compliance teams up to date with any significant developments in the systems, perhaps by agreeing a set of criteria or thresholds for when your IT team should reach out (a simple sketch of such thresholds follows below).
Information security and business continuity teams should also be updated on any changes to, or developments in, systems, to ensure security and avoid critical incidents. Remember that AI systems may update automatically, particularly when procured as part of a standard software as a service contract, such as automatic updates to Microsoft products.
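Here is a minimal sketch of what an agreed set of escalation criteria might look like in code. The metric names and limits are assumptions; the real values should be negotiated between your IT, legal and compliance teams.

```python
# Hypothetical escalation thresholds agreed between IT and legal/compliance.
ESCALATION_THRESHOLDS = {
    "hallucination_rate": 0.02,  # share of sampled outputs failing fact checks
    "output_drift": 0.10,        # distance from the baseline output distribution
    "bias_disparity": 0.05,      # gap in outcomes between monitored groups
}

def needs_escalation(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that breached their agreed threshold."""
    return [
        name for name, limit in ESCALATION_THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]

# A routine monitoring run that should trigger a report to legal/compliance
breaches = needs_escalation({"hallucination_rate": 0.04, "output_drift": 0.03})
if breaches:
    print(f"Notify legal/compliance: thresholds breached for {breaches}")
```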
AI Talent and Capabilities: Invest in building the necessary skills, expertise, and organisational capabilities to effectively govern and manage AI within the organisation.
[4] TUC, AI and Work, available online at: https://www.tuc.org.uk/artificial-intelligence-ai (last accessed June 2024).
Training is essential in maintaining AI systems and their governance programmes. Whether this is cross-functional training carried out in-house, or particular qualifications obtained by your team, always be ready to invest in the learning and development of your organisation. If AI systems ‘learn,’ we must learn too!
Ethical Considerations in AI Governance
As organisations harness the power of AI, it is crucial to address the ethical implications and risks. Key ethical considerations include:
Algorithmic Bias: Ensure AI models are trained on diverse, representative data to mitigate the risk of perpetuating or amplifying societal biases. This may be a complex task, depending on whether your organisation is using smaller in-house models or buying access to a provider’s model.
If possible, consult with vendors to understand the data used to train and fine-tune their models. If AI systems are being developed in-house, ensure your technical and ethics or compliance teams work together to understand the nature of training datasets used, including identifying any unintentional biases. Consult with your IT team to understand whether the use of synthetic data may be required to remove biases, or whether fine-tuning of the model can account for any anomalies.
Outputs from AI systems should be constantly monitored for biases in results (see the sketch below). Guidance from professional organisations or public bodies, such as the UK’s Alan Turing Institute, can assist at the technical level. Ensure that the risk of bias is added to any risk registers and escalated to board level, reminding the board that some countries’ laws place responsibility on them for the effects of algorithmic decision-making. This can include the board’s obligations under company law.
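As an illustration of output monitoring, the sketch below computes a demographic-parity style measure: the largest gap in positive-outcome rates between groups. The group labels, outcome column and 5% tolerance are illustrative assumptions; the right fairness metric and threshold depend on the system and the applicable law.

```python
# Minimal sketch of an output-level bias check using pandas.
import pandas as pd

def selection_rate_gap(results: pd.DataFrame,
                       group_col: str = "group",
                       outcome_col: str = "approved") -> float:
    """Largest gap in positive-outcome rate between any two groups."""
    rates = results.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log with a group label per record
results = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 1, 0, 1],
})
gap = selection_rate_gap(results)
if gap > 0.05:  # tolerance agreed with your ethics/compliance teams
    print(f"Selection-rate gap of {gap:.0%} exceeds tolerance; escalate.")
```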
Transparency and Explainability: Strive for transparency in how AI systems make decisions and provide explanations that can be understood by both technical and non-technical stakeholders.
Understanding how an AI system produces its outputs is crucial to managing its risks. Avoid the temptation to treat the system as a “black box”, and obtain as much information as possible from vendors or suppliers.
Your organisation’s legal and compliance teams should check for any legislative transparency requirements, such as those contained in the EU AI Act. Whether or not it is required by law, transparency is an important means of ensuring your customers trust your services.
By providing clear transparency information on the system, including instances where AI is used by your organisation to make decisions affecting customers, you will help ensure that individual and business customers’ rights are met.
Consider augmenting your privacy notice, supplying transparency information in a separate AI statement, or providing it at the point a customer or individual interacts with an AI system (such as a chatbot); a minimal sketch of the latter follows. Additional guidance from regulators, such as data protection authorities, may assist.
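Point-of-interaction transparency can be as simple as surfacing a disclosure before any model output, as in the sketch below. The wording and the send_to_model callable are hypothetical placeholders for your own chatbot integration.

```python
# Minimal sketch: disclose AI use before a chatbot session begins.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. You can ask for a human "
    "agent to review any decision."
)

def start_chat_session(send_to_model, user_message: str) -> list[str]:
    """Return the transcript, with the disclosure shown first."""
    transcript = [AI_DISCLOSURE]
    transcript.append(send_to_model(user_message))
    return transcript

# Example with a stubbed model call
print(start_chat_session(lambda msg: f"(model reply to: {msg})",
                         "Where is my order?"))
```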
Data Privacy and Security: Implement robust data governance and security measures to protect sensitive information and respect individual privacy rights. Data protection risks in the supply chain of AI, for example training data used by providers of large models, should ideally be identified in contracts for the supply of AI services.
Where personal data is used by your organisation to train and develop AI models, ensure that you have the legal right to use the data for training an AI system. Opinions and guidance from data protection authorities will assist on this issue.
Where personal data is being inputted into an AI system, for example as a prompt to retrieve information, ensure that the system will use the data in a lawful manner, including where it may be shared with software and model providers.
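One practical control is screening prompts for obvious personal data before they leave your organisation. The sketch below redacts simple email and phone-number patterns; these regular expressions are illustrative only and are no substitute for a proper data-protection review.

```python
# Minimal sketch: redact obvious personal data from a prompt before it
# is sent to an external model provider. Patterns are illustrative.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    """Replace matched personal data with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@example.com or +44 20 7946 0958 about the claim."))
# -> "Contact [EMAIL] or [PHONE] about the claim."
```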
Personal data retrieved from an AI system should be handled in accordance with applicable data protection laws and your organisation’s privacy policy. Cybersecurity is also a vital component of AI governance.
Your Information Security team should be involved throughout the lifecycle of AI acquisition, development, deployment and decommissioning. Where necessary, your security team should carry out testing on the system to ensure it is robust and secure.
The data flows and interconnection with other systems, including those managed by vendors, should be documented and considered as part of your information security policy and any incident response and recovery plans.
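Documenting data flows can start with something as lightweight as a structured inventory. In this sketch the systems, vendors and data categories are hypothetical; the flags mirror the review questions above.

```python
# Illustrative inventory of AI data flows for information security review.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str                  # where the data originates
    destination: str             # where it is sent, including vendor systems
    data_categories: list[str]   # kinds of data involved
    crosses_org_boundary: bool   # leaves your organisation's control?
    in_incident_plan: bool       # covered by incident response plans?

flows = [
    DataFlow("CRM", "vendor LLM API", ["customer queries"], True, True),
    DataFlow("HR system", "in-house screening model", ["employee data"], False, False),
]

# Flag flows that leave the organisation or lack incident-plan coverage
for f in flows:
    if f.crosses_org_boundary or not f.in_incident_plan:
        print(f"Review: {f.source} -> {f.destination} ({', '.join(f.data_categories)})")
```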
Accountability and Oversight: Establish clear lines of accountability and empower independent oversight mechanisms to monitor the responsible use of AI. Accountability for AI systems should ideally be documented and established at all levels of the organisation, from the IT implementation team, to middle and senior management, to the board.
Ensuring individuals are both accountable for the operation of an AI system, and have the necessary training and resources to carry out oversight of the system, is vital in ensuring that any issues are addressed in a timely manner.
Documenting the levels of responsibility, as appropriate for your organisation, may take the form of an AI policy. This information should also be incorporated into any information security and incident response policies your organisation has.
Accountability for the Oversight of AI
While the board of directors and C-Suite executives are ultimately accountable for the strategic direction and responsible use of AI within the organisation, the responsibility for the oversight of AI in Governance, Risk, and Compliance will be shared across multiple stakeholders:
Board of Directors: Provide strategic guidance and oversight on the organisation’s AI strategy, governance, and risk management processes. The board should be aware of when, how and why AI is used by your organisation, particularly where it is being used in decision-making that may affect individuals or customers.
Algorithmic bias has formed the basis of several class action lawsuits against boards in the US and Australia, and has been cited as a breach of fiduciary duty under company law. The board should understand the risks and benefits of AI, including having access to an up-to-date risk register of all AI system risks.
C-Suite: Ensure AI initiatives are aligned with the organisation’s overall strategic objectives and that appropriate governance and risk management frameworks are in place. The C-Suite should also understand the resources required to maintain an effective ethics and governance programme.
This should ideally include understanding the risks of individual AI systems, the supply chains behind them, and the impact on the broader data and security strategies for the organisation.
GRC Functions: Collaborate with IT, data, and analytics teams to integrate AI into GRC processes, manage AI-related risks, and ensure compliance. Governance, risk and compliance functions, including information security, privacy, legal, compliance and risk management should be involved at all stages of AI acquisition, development, deployment and decommissioning.
Understanding and managing the risks inherent in AI systems, including legal risks, is vital to utilising AI effectively and managing any cyber incidents that may arise. Documenting levels of engagement, including updates and reviews of systems placed in operation, may be useful in setting expectations and ensuring clear lines of communication throughout the AI life cycle.
Ideally, this should be documented in your organisation’s AI policy or governance framework, but it may also augment existing policies such as those covering data protection and cybersecurity.
IT and Data Teams: Develop, deploy, and maintain AI systems in alignment with the organisation’s governance and risk management requirements. IT and data teams should work closely with vendors when procuring and deploying systems, and with GRC teams when developing them in-house.
It is vital to communicate how AI systems operate to GRC and senior management; in return, IT teams should be empowered to understand the legal, ethical and governance implications of the AI systems for which they are responsible.
Ongoing dialogue across functions, including monitoring for system updates and developments, should be the goal.
Legal and Compliance: Ensure AI-powered systems and decision-making processes adhere to relevant laws, regulations, and ethical standards. Monitoring the increasing number of AI regulations is crucial to ensuring that systems are deployed lawfully.
In addition, legal and compliance teams should monitor opinions and developments from regulators and best practices from standards organisations, as the law governing the use of data used and created by AI develops.
Ensuring that this knowledge is cascaded to IT teams, and to senior management and the board where risks arise, is a vital part of AI governance. Legal and compliance teams may also be well-placed to lead on data and systems mapping, or on drafting an AI policy for your organisation and any transparency notices that may be required.
As AI continues to evolve, it will become an essential component of GRC, enabling organisations to stay ahead of emerging risks, optimise their compliance strategies, and strengthen their overall governance structures. However, it is crucial that the implementation of AI is accompanied by robust governance frameworks, ethical considerations, and a focus on human-AI collaboration to ensure the responsible and effective use of this transformative technology.
#RISK AI, London, 19th November 2024
On the 19th of November, #RISK AI London will focus on the common themes in the regulatory landscape:
- Accountability
- Privacy and data protection
- Safety, security, and reliability
- Transparency
- Bias
- Human agency and oversight
#RISK AI will help you:
- Develop clear policies and procedures to govern the use of Gen AI
- Clearly define roles, responsibilities, and accountability for Gen AI deployment
- Implement robust data protection measures
- Comply with relevant data privacy regulations
- Adopt ethical principles and guidelines for the deployment of Gen AI systems
- Understand bias-mitigation strategies to address potential algorithmic biases
- Establish processes for human oversight
- Foster a culture of awareness and accountability around the use of Gen AI
- Stay informed about evolving regulations and industry standards related to Gen AI
- Establish clear protocols for responding to Gen AI-related incidents