Singapore Launches World’s First Artificial Intelligence Governance Self-Test | Dentons

I. Launching AI Verify

At the World Economic Forum annual meeting held in Davos in May 2022, Minister for Communications and Information Josephine Teo (Minister Teo) announced Singapore’s launch of AI Verify, the world’s first AI governance testing framework and toolkit, which provides companies with a way to measure and demonstrate the safety and reliability of their artificial intelligence (AI) products and services.

The launch of AI Verify in Singapore follows the launch of the Model AI Governance Framework in 2020 and the National AI Strategy in 2019. AI Verify seeks to promote transparency on the use of AI between companies and their stakeholders through self-performed technical testing and process verification. Developed by the Infocomm Media Development Authority and the Personal Data Protection Commission, AI Verify places Singapore at the forefront of international discourse regarding the ethical use of AI.

AI Verify was launched as a minimum viable product (the MVP) that will be subject to further development. Organizations can participate in piloting the MVP, gaining early access to it and using it to perform self-tests on their AI systems and models. Pilot participation also helps shape an internationally applicable MVP that reflects industry needs and supports the development of international standards.

II. AI Governance Testing Framework and Toolkit

Products and services are increasingly using AI to offer greater personalization and make autonomous predictions. There is a strong public desire for AI systems to be fair, explainable, and safe, and for companies that use AI to be transparent and accountable.

The MVP comprises a “testing framework” and a “toolkit”. Together, they allow developers to verify the claimed performance of their AI systems against standardized tests. The MVP does not set ethical standards; rather, it provides a means for developers and owners of AI systems to demonstrate and substantiate their claims about the performance of their AI systems against accepted AI ethics principles.

A. The testing framework

The testing framework addresses five main areas of concern for AI systems (the 5 Pillars), which together cover 11 internationally accepted AI ethics principles (the AI Ethics Principles).

The 5 Pillars are:

  1. transparency on the use of AI and AI systems;
  2. understanding how an AI model arrives at a decision;
  3. ensuring the safety and resilience of AI systems;
  4. ensuring fairness and freedom from unintended discrimination by AI; and
  5. ensuring proper management and oversight of AI systems.

The 11 AI Ethics Principles are:

  1. transparency;
  2. explainability;
  3. repeatability or reproducibility;
  4. safety;
  5. security;
  6. robustness;
  7. fairness;
  8. data governance;
  9. accountability;
  10. human agency and oversight; and
  11. inclusive growth, societal and environmental well-being.

The testing framework defines each AI Ethics Principle and assigns a set of testable criteria to each principle. It provides testing processes, which are actionable steps for checking whether each testable criterion has been met. It also specifies the metrics to be measured and the thresholds that define acceptable values or benchmarks for the selected metrics.
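The metric-against-threshold idea behind a testable criterion can be sketched in a few lines. This is an illustrative assumption, not AI Verify's actual code: the accuracy metric, the 0.8 threshold, and the sample data are all hypothetical.

```python
# Minimal sketch of checking a testable criterion: measure a metric,
# then compare it to a defined acceptance threshold.
# Metric choice, threshold, and data are illustrative assumptions.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def criterion_met(metric_value, threshold):
    """A criterion passes when the measured metric meets its threshold."""
    return metric_value >= threshold

preds = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 0, 0, 1]

score = accuracy(preds, labels)           # 5 of 6 correct
print(f"accuracy={score:.3f}, passed={criterion_met(score, 0.8)}")
```

In the real framework, a criterion of this shape would sit under one of the AI Ethics Principles, with the metric and threshold defined by the testing framework rather than chosen ad hoc.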

B. The toolkit

The toolkit covers technical tests of fairness, explainability, and robustness. It provides a user interface that guides users through the testing process, supports certain binary classification and regression models, and produces a summary report to help developers and system owners interpret test results. It is packaged in a Docker container for easy deployment in the user’s environment.
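To make the fairness category concrete, here is a sketch of one widely used fairness test for a binary classifier: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The group data and the 0.1 tolerance are hypothetical; AI Verify's actual fairness tests and thresholds may differ.

```python
# Illustrative fairness check for a binary classifier: the demographic
# parity difference between two demographic groups. A smaller gap means
# the model grants positive outcomes at more similar rates.
# Data and the 0.1 tolerance are illustrative assumptions.

def positive_rate(predictions):
    """Share of predictions that are the positive class (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_diff(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

group_a = [1, 0, 1, 1]   # model outputs for applicants in group A
group_b = [1, 0, 0, 1]   # model outputs for applicants in group B

gap = demographic_parity_diff(group_a, group_b)
print(f"parity gap = {gap:.2f}, within 0.1 tolerance: {gap <= 0.1}")
```

Tests of this kind are what a summary report would aggregate: each metric, its measured value, and whether it falls within the acceptable range.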

III. The development of international standards on AI governance

AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS, Standard Chartered Bank, UCARE.AI and X0PA.AI have tested and/or provided feedback on the MVP. Going forward, Singapore aims to work with owners and developers of AI systems around the world to gather and establish industry benchmarks for the development of international standards on AI governance. To promote the interoperability of AI governance frameworks and the development of international AI standards, Singapore participates in ISO/IEC JTC1/SC 42 on AI and is working with the US Department of Commerce and other like-minded countries and partners.

IV. Final Thoughts

Developments in the digital space are changing rapidly, and regulation must keep pace. Rule-making will in turn require policymakers and technology leaders to engage dynamically and collaboratively, with the ultimate goal of enabling the adoption of new technologies while guarding against the risks that come with them.

Dentons Rodyk thanks and acknowledges trainee in practice Tan Wei En for his contributions to this article.
