A survey, a training programme, and an experiment in assurance:
An update on our next steps towards safe and fair AI for Dubai

Algorithms and data sharing are becoming increasingly important in our global quest for smarter city services, better quality of life, and sustainable development. However, the ubiquity of algorithms and the data they sift through makes the study of their consequences — both intended and unintended — far more important as well. 

More and more companies are relying on AI in their operations, which means the potential for people to be affected by AI decisions is amplified. Research by the Economist Intelligence Unit shows 86% of banks and insurance companies plan to increase AI-related technology investment by 2025.[1] Meanwhile, the Middle East is expected to see some USD 320 billion in benefits from the use of AI by 2030.[2]

Organisations are therefore recognising the importance of AI ethics and AI ethical risk programmes, with over 50% of executives reporting “major” or “extreme” concern over the reputational and ethical risks of AI in their organisations.[3]

Nor are these risks fanciful. Algorithms have already caused well-publicised instances of financial and reputational damage. Knight Capital went bankrupt because of a glitch in its algorithmic trading system. Amazon was forced to scrap an AI recruitment tool that showed bias against women.[4] And during COVID-19, the UK government controversially used an algorithm to assign grades after exams were cancelled; the following year, grade assessments were put back in the hands of human teachers.[5]

This is why AI ethics, a relatively new field, is now taking centre stage for policymakers and for public and private sector organisations. Digital Dubai wants to help shape the global conversation around AI ethics, working to establish Dubai as a thought leader in AI adoption across the public and private sectors. In 2017, Digital Dubai collaborated with IBM to launch its AI Lab to develop potential AI use cases. This was followed in 2019 by the AI Ethics Toolkit[6], and the formation of an AI Ethics Board drawing in local and global stakeholders to inform our direction.

Building on this groundwork, Digital Dubai now wants to provide further practical support to the development of Dubai’s AI ecosystem. To gauge the demand for AI use cases in Dubai, and the challenges faced by the broader ecosystem, we are conducting a comprehensive survey of the public and private sectors with the Mohammed bin Rashid School of Government (MBRSG) and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). If you have received an invitation to answer the survey, please contribute to our effort by taking a few minutes to do so! As with an algorithm, the more high-quality data we can feed our survey, the better the decisions we can derive from it.

Understanding demand and potential ethical AI use cases is but the first step. Organisations need practical and educational support in mitigating the ethical risks of AI systems. According to Boston Consulting Group research[7], AI failures can only be effectively countered by moving beyond algorithmic fairness and bias analyses to consider potential second- and third-order effects on safety, privacy and society at large. Here, the challenge is in execution, because companies often don’t know how to bridge the gap between principles and concrete actions.

According to research published by the Centre for Data Ethics and Innovation[8], training and education are vital elements of putting responsible AI into practice. This requires a slow yet determined effort to lay the foundations of capability, training and technical knowledge that can be built into a fully fledged ecosystem.

This is why, in 2021, a meeting of the Dubai AI Ethics Board gave rise to a training initiative to bridge the gap between the technological side of AI, its organisational applications, and broader awareness. The initiative will bring together leading practitioners across the public and private sectors to build capacity among AI decision-makers, policy leaders, and data champions within government.

The central aim of the course is to empower organisations, and give them the tools, to consider AI ethical risks when designing and implementing AI systems in Dubai. It will explore the possibilities of ethical AI use cases to support business, and delve into background considerations, such as data management, that underpin ethical AI.

Globally, the conversation around AI ethics is expanding because the range of actors involved in, and impacted by, AI systems is expanding. Ideally, these systems need to be reviewed and kept in check by a range of actors including regulators, frontline users, and developers to ensure they are functioning as they should.

Yet the challenge here is that it is difficult for non-specialised actors to verify that AI systems are trustworthy. This has led thought leaders in the field to call for an ecosystem of AI assurance. According to Ahamat, Chang and Thomas (2021)[9], the term “assurance” originates in the accounting profession but has since been adapted to AI. They offer the following definition:

“Assurance covers a number of governance mechanisms for third parties to develop trust in the compliance and risk of a system or organisation. In the AI context, assurance tools and services are necessary to provide trustworthy information about how a product is performing on issues such as fairness, safety or reliability, and, where appropriate, ensuring compliance with relevant standards.”

A robust assurance ecosystem for AI requires processes, technical standards, certification schemes, and awareness raising. To catalyse this, Digital Dubai is working with partners to better understand the current state of, and appetite for, AI assurance in Dubai.

Our goal is to run three trials on AI assurance – ensuring that the ethical and regulatory elements of AI develop alongside technology and business use cases. Going further than our existing toolkit, our approach to building an AI assurance ecosystem in Dubai will hopefully provide detailed technical and process mitigations that keep AI systems operating on the right side of safety and fairness.

Assurance is not a quick fix. It needs to be understood through multiple stakeholders and viewpoints, investigating multiple-order effects and relying on a social as well as technical approach.

Irrespective of the challenges, however, we are convinced of the usefulness of the assurance exercise. This year’s proposal from the European Commission for heavier regulation of AI raises the prospect of increased AI audits. It may be some time yet before that audit questionnaire lands in your inbox, but in our opinion audits and a broader assurance ecosystem are integral to growing the digital economy and creating the AI-enabled public services of tomorrow.

[5] https://www.bbc.com/news/education-56157413

[6] https://www.digitaldubai.ae/docs/default-source/ai-principles-resources/ai-ethics.pdf

[7] https://www.bcg.com/publications/2020/six-steps-for-socially-responsible-artificial-intelligence

[8] https://cdei.blog.gov.uk/2021/04/15/the-need-for-effective-ai-assurance/

[9] https://cdei.blog.gov.uk/2021/04/15/the-need-for-effective-ai-assurance/#:~:text=In%20the%20AI%20context%2C%20assurance,ensuring%20compliance%20with%20relevant%20standards.