Artificial intelligence (AI) has become ubiquitous in daily life, from healthcare and finance to virtual assistants and autonomous vehicles. AI has the potential to revolutionize how humans live and work and to accelerate transformations in the way information is produced and the way humans interact with technology. But it also raises significant ethical, legal, and social challenges. One of the most pressing concerns is the lack of accountability and transparency in AI decision-making processes, which can lead to bias, discrimination, and harm to individuals and society.
There are several common challenges related to enterprise adoption of AI: gaps in the knowledge of how AI works, difficulty analyzing and explaining the outputs of models, and inappropriate assignment of accountability and ownership.
Therefore, auditing an AI model through technological, regulatory, data, and process views is fundamental to ensuring the responsible and ethical use of AI.
The Importance of AI Audit
An AI audit is a review of AI systems, algorithms, and data to identify and mitigate potential risk, threats, and impact. It involves assessing the performance, reliability, security, and ethical implications of AI systems and evaluating their compliance with legal and regulatory requirements. An AI audit can be used to identify and address issues such as data quality, algorithmic bias, fairness, privacy, and security, which are often overlooked or ignored in the rush to develop and deploy AI. The audit can be performed at different stages of the AI life cycle, from design and development to deployment and operation.
An AI audit can also help build trust and confidence in AI systems by providing evidence of their reliability, fairness, and accountability. It can enhance transparency and communication among developers, users, and regulators and foster a culture of responsible and ethical AI development.
Achieving Ethical AI
However, AI audit is not without its own challenges and limitations. One of the main challenges is the lack of standardization and consensus on what constitutes good AI governance and ethics. There is no universal framework or methodology for conducting an AI audit, and one auditor may have different criteria and standards from the next. This can create confusion and inconsistency and undermine the credibility and effectiveness of AI audit.
However, recently some governments have issued or begun drafting regulations to promote the safe and ethical development of AI models:
- The European Union has proposed the “Artificial Intelligence Act” to establish common rules on the use of AI and address ethical and security challenges.1
- The United Kingdom is exploring ethical regulations on AI and may develop further guidance through the Office for Artificial Intelligence.2
- China has issued ethical guidelines for AI development and is exploring regulations to address ethical and safety challenges.3
- In the United States, there is no specific federal law on AI, but there are ongoing discussions about the need for national regulations.4
- Singapore has issued guidelines on AI through the “Model AI Governance Framework” to promote ethical and responsible use of AI.5
AI has the potential to transform lives for the better, but only if it is developed and used in a responsible and ethical way.
AI systems often use complex and sophisticated algorithms based on large datasets, which can be difficult to understand and interpret. Moreover, AI systems can learn and adapt over time, which can make it hard to predict their behavior or anticipate their effects. Planning to harness the power of AI is not a simple matter of purchasing and installing an emerging technology solution, because the very nature of AI affects many business areas. If organizations want to equip themselves with such disruptive tools, they must prepare an adequate, structured approach to supervising and controlling effective adherence to quickly evolving regulations.
Every regulator, global or local, is striving to ensure the accountability and transparency of AI systems. This is a critical step toward building trust and confidence in these systems and will help ensure that they are developed and used in a responsible and ethical way. The roadmap for achieving this goal consists of several steps:
- Establish clear ethical and legal frameworks—This should include developing guidelines for data collection, use, and sharing as well as guidelines for ensuring transparency, fairness, and accountability in AI decision-making processes.
- Design and develop AI systems with ethics in mind—This should happen from the outset. This means that developers should prioritize fairness, transparency, and accountability in the design and development of AI algorithms and systems.
- Address algorithmic bias—Algorithmic bias is a significant ethical issue in AI systems because it can lead to discrimination and harm to vulnerable populations. These biases develop in models in different ways, often reflecting the trends and biases present in the data on which they are trained. Common types include temporal, sampling, cultural, and social bias. Developers and auditors should be vigilant in identifying and addressing algorithmic bias in AI systems (a minimal measurement sketch follows this list).
- Ensure data privacy and security—Data privacy and security are critical concerns in AI systems because they can impact individual privacy and data protection. AI systems should be designed and developed with strong data privacy and security protections in place and should be audited regularly to ensure compliance with relevant regulations.
- Foster transparency and communication—Transparency and communication among developers, users, and regulators are critical for building trust and accountability in AI systems. Developers should be transparent about the algorithms and decision-making processes used by AI systems and should communicate clearly about how AI systems are being used and for what purposes.
- Foster collaboration and innovation—Fostering collaboration and innovation is critical for ensuring that AI systems are developed and used in a responsible and ethical way. This means that stakeholders from different sectors, including academia, industry, government, and civil society, should work together to share best practices, identify new challenges, and promote ethical and responsible AI development.
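To make the bias assessment concrete, the following minimal sketch measures the demographic parity difference, one common fairness metric. It assumes a binary classifier’s predictions and a binary protected attribute are available as arrays; the data and the tolerance mentioned in the comments are purely illustrative, and the appropriate metric and threshold are policy decisions for each organization.

```python
# Minimal bias check: demographic parity difference between two groups.
# Assumes binary predictions (0/1) and a binary protected attribute.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Illustrative predictions from a hypothetical credit-scoring model
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")
# An auditor might flag gaps above an agreed tolerance (e.g., 0.10)
# for investigation; the tolerance itself is a policy decision.
```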
A Proposed AI Audit Approach
A proposed approach to auditing an AI model to determine whether it meets explainability, transparency, and ethics criteria, while not necessarily exhaustive, is depicted in figure 1.
Planning
In this phase of the audit, all AI models are identified, and a risk assessment is conducted in line with internal or external regulations to determine which AI models fall within the scope of the audit.
Subsequently, the main stakeholders (e.g., general counsel, regulation specialists, data scientists, and data owners) must be identified to acquire the necessary resources.
In the final stages of planning, it is necessary to assemble an audit team that possesses the specific skills required by the domain in which the audited model operates.
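As an illustration of the risk assessment step, the following sketch screens a hypothetical AI model inventory against simple scoping criteria. The models, criteria, weighting, and threshold are invented examples; in practice, they would come from the organization’s risk framework and applicable regulations.

```python
# Illustrative risk screening of an AI model inventory during planning.
# The models, criteria, and scoring scheme below are hypothetical.
MODELS = [
    {"name": "credit_scoring", "uses_personal_data": True,
     "automated_decision": True, "regulated_domain": True},
    {"name": "churn_forecast", "uses_personal_data": True,
     "automated_decision": False, "regulated_domain": False},
    {"name": "stock_replenish", "uses_personal_data": False,
     "automated_decision": False, "regulated_domain": False},
]

def risk_score(m):
    # Each criterion met adds one point; a higher score means higher audit priority.
    return sum([m["uses_personal_data"], m["automated_decision"], m["regulated_domain"]])

in_scope = [m["name"] for m in MODELS if risk_score(m) >= 2]
print("Models in audit scope:", in_scope)  # ['credit_scoring']
```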
Data Management Auditing
AI models are strictly dependent on the data on which they are trained and on which they normally operate. (Consider the expression “garbage in, garbage out.”6) Therefore, it is necessary to verify the data management processes and the design and operation of the controls that affect AI data.
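The following minimal sketch, using pandas, illustrates the kind of automated data quality checks an auditor might run against a training dataset. The dataset, column names, and validity ranges are hypothetical stand-ins for the data under audit.

```python
# Minimal data management checks an auditor might automate with pandas.
# The dataset, column names, and thresholds below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],               # one duplicate identifier
    "income": [35000, None, 42000, 58000],      # one missing value
    "age": [34, 29, 29, 131],                   # 131 is out of range
})

findings = {
    "missing_values": df.isna().sum().to_dict(),                  # completeness
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),   # uniqueness
    "age_out_of_range": int((~df["age"].between(0, 120)).sum()),  # validity
}
print(findings)
```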
Algorithm Auditing
The development of an AI model must follow a structured approach that starts with the analysis of the business problem to be addressed, followed by the development phase, deployment into production, and ongoing management.
To safeguard the enterprise’s reputation and brand trust, mitigate risk, and effectively monetize AI capabilities, it is necessary to directly manage all audits and tests related to the adoption of AI trust principles:
- Fair—Reduction of bias in algorithms to avoid unfair discrimination against certain groups
- Transparent/explainable—Clarity and openness about the inner workings of an AI system and the ability to explain in a comprehensible way how an AI model reached a specific decision or prediction (an illustrative probe is sketched after this list)
- Reliable—The ability of an AI system to provide accurate and consistent results over time and in different situations. For an AI system to be considered trustworthy, it must be able to maintain high standards of performance and consistency, demonstrating sufficient robustness and stability.
- Secure—Implementation of robust security measures to protect AI systems, data, and the users involved. Security in AI is critical to avoid threats, malicious manipulations, privacy violations, and other risk associated with the deployment of AI technologies.
- Private—The protection and confidential treatment of information and data associated with AI models. Privacy in AI is of particular importance when it comes to handling and manipulating sensitive or personal data.
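As a concrete illustration of the transparent/explainable principle, the following sketch uses scikit-learn’s permutation importance to probe which input features most influence a trained model’s predictions. The public dataset and random forest model are stand-ins for whatever system is under audit, and global feature importance is only one of several interpretability techniques an auditor might apply.

```python
# Minimal explainability probe: permutation importance shows which input
# features most influence a trained model's predictions. The dataset and
# model are illustrative stand-ins for a system under audit.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Record the five most influential features in the audit workpapers.
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```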
Closing
During the last phase of the audit, it is necessary to document the audit findings, including sources of risk, gaps, and opportunities for improvement, and provide recommendations for remediation and mitigation actions for identified gaps.
Conclusion
AI audit is a critical activity to ensure the accountability and transparency of AI. It can help identify and mitigate potential risk and harm, build trust and confidence in AI systems, and promote responsible and ethical AI development. However, AI audit faces challenges and limitations, such as the lack of standardization and the complexity of AI systems. These challenges must be addressed through collaboration between regulatory authorities and auditors and through the use of technological innovation (e.g., automatic data analysis software or interpretability methods for AI models) to ensure the responsible and ethical use of AI to the benefit of enterprises and society.
Endnotes
1 EU Artificial Intelligence Act, “The Act,” http://artificialintelligenceact.eu/the-act/
2 GOV.UK, “Office for Artificial Intelligence,” http://www.gov.uk/government/organisations/office-for-artificial-intelligence
3 International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences, “The Ethical Norms for the New Generation Artificial Intelligence, China,” 2021, http://ai-ethics-and-governance.institute/2021/09/27/the-ethical-norms-for-the-new-generation-artificial-intelligence-china/
4 US Department of State, “Artificial Intelligence (AI),” http://www.state.gov/artificial-intelligence/
5 Personal Data Protection Commission Singapore, “Singapore’s Approach to AI Governance,” http://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework
6 Rouse, M.; “Garbage In, Garbage Out,” Techopedia, 4 January 2017, http://www.techopedia.com/definition/3801/garbage-in-garbage-out-gigo
Denis Piazzi | CISA
Is an Italian senior manager at a worldwide consulting firm with many years of experience in information and communication technologies (ICT) audit and governance, risk, and compliance (GRC). He has held managerial roles and completed projects for multinational clients related to ICT audit and GRC.