What overarching goals should an AI system have? How should it behave, and what values should it follow? Our four non-binding AI principles are our attempt to answer crucial questions such as these.

These high-level statements lay out our aspirations and roadmap for the behaviour of AI systems as they get smarter and become an integral part of our lives. The four main principles contain sub-principles that help us more clearly define our goals for AI design and behaviour.

We want these principles to become a common foundation of agreement for industry, academia and individuals in navigating the rapidly developing world of AI. We consider these principles a collaborative work in progress, and value your feedback in refining them.

Ethics

We will make AI systems fair

1. Data ingested should, where possible, be representative of the affected population

2. Algorithms should avoid non-operational bias

3. Steps should be taken to mitigate and disclose the biases inherent in datasets

4. Significant decisions should be provably fair

We will make AI systems accountable

1. Accountability for the outcomes of an AI system lies not with the system itself but is apportioned between those who design, develop and deploy it

2. Developers should make efforts to mitigate the risks inherent in the systems they design

3. AI systems should have built-in appeals procedures whereby users can challenge significant decisions

4. AI systems should be developed by diverse teams which include experts in the area in which the system will be deployed

We will make AI systems as explainable as technically possible

1. Decisions and methodologies of AI systems which have a significant effect on individuals should be explainable to them, to the extent permitted by available technology

2. It should be possible to ascertain the key factors leading to any specific decision that could have a significant effect on an individual

3. In the above situations, we will provide channels through which people can request such explanations

We will make AI systems transparent

1. Developers should build systems whose failures can be traced and diagnosed

2. People should be told when significant decisions about them are being made by AI

3. Within the limits of privacy and the preservation of intellectual property, those who deploy AI systems should be transparent about the data and algorithms they use

Security

AI systems will be safe, secure and controllable by humans

1. The safety and security of people, be they operators, end-users or other parties, will be of paramount concern in the design of any AI system

2. AI systems should be verifiably secure and controllable throughout their operational lifetime, to the extent permitted by technology

3. The continued security and privacy of users should be considered when decommissioning AI systems

4. AI systems that may directly impact people’s lives in a significant way should receive commensurate care in their design, and;

5. Such systems should be able to be overridden or their decisions reversed by designated people

AI systems should not be able to autonomously hurt, destroy or deceive humans

1. AI systems should be built to serve and inform, and not to deceive and manipulate

2. Nations should collaborate to avoid an arms race in lethal autonomous weapons, and such weapons should be tightly controlled

3. Active cooperation should be pursued to avoid corner-cutting on safety standards

4. Systems designed to inform significant decisions should do so impartially

Humanity

We will plan for a future in which AI systems become increasingly intelligent

1. Governance models should be developed for artificial general intelligence (AGI) and superintelligence

2. AGI and superintelligence, if developed, should serve humanity as a whole

3. Long-term risks of AI should be identified and planned for

4. Recursively self-improving AI development should be disclosed and tightly monitored and controlled for risk

We will give AI systems human values and make them beneficial to society

1. Government will support research into the beneficial use of AI

2. AI should be developed to align with human values and contribute to human flourishing

3. Stakeholders throughout society should be involved in the development of AI and its governance

Inclusiveness

We will promote human values, freedom and dignity

1. AI should improve society, and society should be consulted in a representative fashion to inform the development of AI

2. Humanity should retain the power to govern itself and make the final decision, with AI in an assisting role

3. AI systems should conform to international norms and standards with respect to human values, people’s rights and acceptable behaviour

We will respect people’s privacy

1. AI systems should respect privacy and use the minimum intrusion necessary

2. AI systems should uphold high standards of data governance and security, protecting personal information

3. Surveillance and other AI-driven technologies should not be deployed in ways that violate internationally accepted standards, or those of the UAE, on privacy, human dignity and people’s rights

We will share the benefits of AI throughout society

1. Development of AI systems will be matched by a response to its impact on employment

2. AI will be used to help humans retain purpose and flourish mentally, emotionally and economically alongside AI

3. Access to training, opportunity and tools will be made available to all

4. Education should evolve and reflect the latest developments in AI, enabling people to adapt to societal change

We will govern AI as a global effort

1. Global cooperation should be used to ensure the safe governance of AI

2. Government will support the establishment of internationally recognised standards and best practices in AI, and when they are established, shall adhere to them

Contribute to the future development of the Dubai AI Principles

The Dubai AI Principles are designed to inspire and inform future AI behaviour. Indeed, collaboration from all stakeholders is vital in ensuring their sustainability and usefulness. Your feedback is therefore warmly appreciated.



Have a Question?
  • AI ethics is a growing topic of discussion on the international stage. Governments, NGOs and companies are trying to understand how they can develop AI systems in an ethical way to avoid harm to individuals while also safeguarding themselves from reputational and legal damage. There is an understanding that regulation may be needed at some stage, but the field is not yet sufficiently mature. The pace of advancement is also too rapid for the ecosystem to be codified within a regulatory framework.

    There is still a need for guidance and collaboration, both at individual and organisational level. For regulators, the challenge is to start conversations about how to regulate this emerging technology without stifling innovation and advancement. There is also the need to begin forming a unified view of best practices in AI development, and to offer clarity on ethical frameworks that can inform this development. Dubai wishes to be part of this global conversation, and wants to establish itself as a thought leader in AI adoption for both public and private sectors. We want to create a platform, and to offer resources, that can spark meaningful debate. Our toolkit is a collaborative work, and is designed to draw in stakeholders both local and global in its refinement and evolution.

    Apart from theoretical elements and principles, the Dubai AI Ethics Toolkit also offers resources designed to be clear, accessible and practical in implementation so that those involved in the development and use of AI systems can already begin to voluntarily benchmark themselves against best practice.

  • Artificial intelligence (AI) is the capability of a functional unit to perform functions that are generally associated with human intelligence such as reasoning, learning and self-improvement.

    An AI system is a product, service, process or decision-making methodology whose operation or outcome is materially influenced by artificially intelligent functional units. A particular feature of AI systems is that they learn behaviour and rules not explicitly programmed. Importantly, it is not necessary for a system’s outcome to be solely determined by artificially intelligent functional units in order for the system to be defined as an artificially intelligent system. Simply put, hybrid systems with both conventional and AI capabilities would still qualify as AI systems. If your system has any AI component within it, the overall system can be considered an AI system.

  • Insofar as our remit is concerned, ethics covers the concepts of fairness, accountability, transparency and explainability (FATE). To keep things manageable and the discussion focused, our ethics model does not currently include privacy concerns, model accuracy (except insofar as fairness and redress are concerned), employment, or any other AI-related issues besides FATE.

    1. Dubai’s AI Principles - High level, non-statutory (not legally binding), non-audited set of statements that indicate how we want to develop AI in Dubai. There are 4 overarching principles with sub-principles under each of them.

    2. Dubai’s Ethical AI Guidelines - Like the Principles, the Guidelines are also non-statutory and non-audited. They are more practical and apply to specific sets of actors. Only the Ethics principle has currently been expanded into a set of guidelines. The other principles may be developed into guidelines in time, though not necessarily by Smart Dubai. We’re happy for the community to take this on.

    3. Self-Assessment Tool – Designed to translate the guidelines into practical application and assessment. The self-assessment tool allows entities to first classify their AI system, and then assess their ethical score based on the system type, and can be used by both AI developers and client organisations. It is a useful tool for inclusion in RFPs by AI client organisations to ensure that vendor solutions meet ethical criteria. Best practice calls for AI operators to take the lead in ensuring that the self-assessment tool questions have been answered correctly with relevant evidence.

    4. Pointers to key literature for technical experts - This document aims to help technical experts (e.g. data scientists, machine learning engineers, AI engineers) investigate how to apply ethics to AI systems. This is a work in progress given that ethics in AI is a rapidly evolving field with further advances imminent. AI developers can evaluate for themselves the suitability and risks of each decision-making method.

  • The Dubai AI Principles and the guidelines derived from them are not mandatory, audited or legally binding. The aim is to create a platform for conversation around ethics in AI systems, with developers and AI system operators choosing to voluntarily abide by the guidance provided. Eventually, with community buy-in, these principles and guidelines might serve as a basis for AI system regulation when it is introduced.
  • Participation can be beneficial for AI developer and operator organisations for several reasons. By contributing, entities and individuals can:

    • Self-assess and mitigate unintentional unethical behaviour by their AI systems that might otherwise lead to public backlash, reputational damage or even legal liability

    • Gain a clear understanding of the meaning of ethics in AI, and communicate this effectively to stakeholders and customers – thereby increasing trust in AI systems and improving acceptance

    • Contribute to the development of a unified view on best practices in AI development

    • Become part of a conversation that is helping shape best practices that might eventually form the basis of formal regulations

  • The toolkit is designed to be useful for AI developer and operator organisations across the public and private sectors. In the near term, however, only public sector entities, together with the private companies that develop AI systems for them, are actively expected to use the toolkit.

    The guidelines refer to ‘AI developer organisations’ and ‘AI operator organisations’. The definitions of these are as follows:


    • An AI developer organisation is an organisation which does any of the following:

    • Determine the purpose of an AI system;

    • Design an AI system;

    • Build an AI system, or;

    • Perform technical maintenance or tuning on an AI system

    N.B. The definition applies regardless of whether the organisation is the ultimate user of the system, or whether they sell it on or give it away

    • An AI operator organisation is an organisation which does any of the following:

    • Use AI systems in operations, backroom processes or decision-making

    • Use an AI system to provide a service to end-users

    • Is a business owner of an AI system

    • Procure and treat data for use in an AI system, or;

    • Evaluate the use case for an AI system and decide whether to proceed

    N.B. This definition applies regardless of whether the AI system was developed in-house or procured.

    It is possible for organisations to be both an AI developer organisation and an AI operator organisation.

  • AI already surrounds us, but some applications are more visible and sensitive than others. This toolkit is applicable only to those AI systems that make or inform ‘significant decisions’ - decisions that can have significant impact on individuals or society as a whole. It also applies to ‘critical decisions’, which are a subset of significant decisions. Full definitions are available in the guidelines.

  • The Dubai AI Principles serve as the foundation for the use of AI in Dubai. Entities should consider them while developing and operating AI systems and when setting strategy.

    The Dubai AI guidelines can be applied by integrating them with the entity’s existing policies, standards and other documentation. It is important that they are used throughout the development and deployment process in order to achieve ethical design, rather than being an afterthought. Employees should be educated about the meaning and importance of ethical design throughout the process, and the guidelines can act as an educational document in this case.

    The self-assessment tool can be used directly by AI developer organisations to ensure that they meet the standards expected of public sector AI systems. AI operator organisations can include parts of the self-assessment tool in their RFPs if they work with vendors. AI operator organisations, as actors who ultimately deploy AI systems to serve people, are responsible for ensuring that adequate ethical standards have been met by internal development teams, and by the vendors they choose to do business with.

  • The toolkit ultimately serves the people and end-users who depend on, or are affected by, AI systems. The term used in the toolkit is ‘AI subject’. An AI subject is a natural person who is any of the following:

    • An end-user of an AI system

    • Directly affected by the operation of or outcomes of an AI system, or;

    • A recipient of a service or recommendation provided by an AI system

    The toolkit helps ensure that people are treated fairly, are able to challenge decisions they perceive to be unfair, and have access to important information, delivered in an understandable manner, about how AI affects their lives.

  • The rest of the principles (Humanity, Inclusiveness and Security) will be developed further in partnership with other government entities, and with contributions from the wider community.