AI Governance: Ethics of Artificial Intelligence

Author: Muhammad Ali Safdar, Ph.D. program in Law at Friedrich Alexander Universität, 2019-2023

Editor: Danai Daisy Chirawu, Erasmus Mundus Joint Masters in Human Rights Policy & Practice (2021-2023), The University of Gothenburg, University of Deusto, University of Roehampton & The Arctic University of Norway, Bachelor of Laws (With Honours) (LLBS) (2012-2016) – the University of Zimbabwe


Abstract

The law and governance of Artificial Intelligence demand attention now. With so many applications of Artificial Intelligence under development, it is important to ensure that these technologies are used responsibly and ethically, which requires careful consideration of AI’s legal and ethical implications. This paper explores the ethics of Artificial Intelligence through the lens of governance. It discusses the existing regulatory measures adopted by states and companies to govern AI and critiques the soft laws they have made to bridge the governance gap. It then identifies the main actors responsible for AI governance and examines how they legislate to provide ethical grounds for AI. Finally, it outlines the way forward for the governance and ethical oversight of Artificial Intelligence.

Introduction

AI is here to stay: it is present everywhere and affects many facets of our lives. Automated systems now influence decisions such as selecting applicants for interviews, approving bank loans, and even prescribing cancer treatments. The emergence of AI over the past few years has been impossible to ignore. Tech giants such as Amazon, Facebook, and Microsoft have built new research laboratories devoted to developing artificial intelligence, and significant investment has flowed into AI start-ups. Because of its widespread use, artificial intelligence is frequently conflated with software in general, which is understandable, as it is hard to find software that does not contain some form of AI.

Oversight of AI is necessary, but it is unclear how it should or could be done at present. There is not yet a widely used method, apart from some traditional governance approaches that have since become redundant. Countries have relied on soft law to govern AI, but an implementation gap remains, as does the need to update, repeal, and amend laws to keep pace with the dynamic and rapidly shifting world of AI. There is a dire need to overcome the governance and ethical problems of AI through regulatory measures. Such measures should be pursued at the international level by framing common legislation and international law that binds states to govern AI according to shared principles. National governments should also prepare codes and regulations to maintain AI governance, as most global north nations have been doing. Lastly, companies should design their own codes and standards for AI governance.

Conceptualizing Artificial Intelligence

Defining AI is one of the most challenging tasks facing experts. Many overlapping definitions and concepts make AI a complex phenomenon, and many viewpoints—from human-centric to rationalist and sceptic—have been developed with AI in mind. Through the lens of ethics, law, and principle, however, AI may be defined as “the ability of a non-natural entity to make choices by an evaluative process.”

A ‘non-natural entity’ is a manufactured entity that requires novel legal treatment, whereas an ‘evaluative process’ is a procedure in which principles are weighed against one another before a decision is made.

Rules and principles can be contrasted. Rules apply in an “all-or-nothing” manner: a valid rule is conclusive when it applies to a given situation, and if two rules conflict, one of them cannot be valid. Principles, by contrast, justify a range of actions without always being decisive. Principles carry “weight,” as rules do not, and a conflict between two valid principles is resolved by choosing the side supported by the greater weight of principles.
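To make this contrast concrete, the short Python sketch below models the two decision styles. It is a minimal illustration only, not drawn from this paper’s sources: the Rule and Principle classes, the numeric weights, and the bank-loan scenario are hypothetical stand-ins for the far richer reasoning a court, regulator, or AI system would actually perform.

```python
from dataclasses import dataclass


@dataclass
class Rule:
    """A rule applies in an all-or-nothing manner: if it applies, it decides."""
    name: str
    applies: bool
    outcome: str


@dataclass
class Principle:
    """A principle carries 'weight'; competing principles are balanced, not invalidated."""
    name: str
    supports: str   # the outcome this principle favours
    weight: float   # hypothetical numeric stand-in for legal 'weight'


def decide_by_rules(rules: list[Rule]) -> str:
    """The first applicable rule is conclusive; conflicting rules cannot both be valid."""
    for rule in rules:
        if rule.applies:
            return rule.outcome
    return "no rule applies"


def decide_by_principles(principles: list[Principle]) -> str:
    """Sum the weight behind each outcome and choose the better-supported side."""
    totals: dict[str, float] = {}
    for p in principles:
        totals[p.supports] = totals.get(p.supports, 0.0) + p.weight
    return max(totals, key=totals.get)


if __name__ == "__main__":
    # Hypothetical bank-loan example echoing the AI uses named in the introduction.
    rules = [
        Rule("statutory prohibition on automated denial", applies=False, outcome="deny"),
        Rule("regulator-approved credit product", applies=True, outcome="approve"),
    ]
    print(decide_by_rules(rules))  # -> "approve": one applicable rule settles it

    principles = [
        Principle("fairness to the applicant", supports="approve", weight=0.6),
        Principle("privacy of third-party data", supports="deny", weight=0.3),
        Principle("financial prudence", supports="deny", weight=0.2),
    ]
    print(decide_by_principles(principles))  # -> "approve" (weight 0.6 vs 0.5)
```

On this toy model, the ‘evaluative process’ in the definition above corresponds to decide_by_principles: weighing competing considerations against one another rather than mechanically applying a single conclusive rule.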

Why is the ethics of AI necessary?

AI ethics is an emerging field concerned with the ethical implications of artificial intelligence. AI has the potential to revolutionize the way we work and live, but it is important to consider the ethics of using it, both as it is used today and as it may be used in the future. The goal is to create guidelines and principles that ensure AI is used ethically and with regard for the safety and rights of individuals. AI ethics examines privacy, data protection, and fairness in decision-making: we must consider the risks AI poses to our privacy and autonomy, and AI systems should be designed to protect us from exploitation and biased decisions. It also examines the potential for AI to be used for malicious or nefarious purposes; as AI becomes increasingly prevalent, it is important to ensure that it is used for good and not for ill. The development of ethical AI will help ensure that AI benefits everyone, not just a select few, and AI ethics is a vital step towards ensuring that the use of AI is ethical and responsible.

Even where goodwill exists, conducting business or any other activity on the basis of trust alone is problematic from the outset. Anyone who wants to regulate something, especially something as flexible as AI, must first understand and examine it thoroughly. As AI systems grow more complicated and tightly coupled, there is a real concern that human autonomy will be reduced. It is not advisable to set up a system and then leave it to operate without periodic checks; one day, the system may evolve in ways that cause significant harm.

Traditional Regulations of AI

AI shares many of the traits that make other emerging technologies resistant to all-encompassing legislative solutions. A coordinated regulatory response is challenging, for instance, because AI involves applications that span several industries, governmental agency mandates, and stakeholder groups. AI also raises questions and worries beyond the usual regulatory focus on dangers to people’s health, safety, and the environment. Indeed, many of the problems AI brings about lie outside the purview of any existing regulatory body, including technological unemployment, interactions between humans and machines, biased algorithms, and existential dangers from potential super-intelligence.

Traditional pre-emptive regulatory decision-making is also challenging given the uncertainty around AI’s risks, benefits, and trajectory. For these reasons, it is safe to predict that extensive traditional regulation of AI will not come into effect for some time, unless a catastrophe forces a harsh and undoubtedly inadequate regulatory response. Small portions of the larger AI sector may still be susceptible to conventional regulatory solutions, and these should be pursued. However, such piecemeal advances in regulation will not be enough to address the existential, ethical, and safety issues that AI poses. We will require more.

Soft Law of AI and ineffective implementation

Soft laws establish important expectations but are not directly subject to governmental enforcement. They can take the form of private standards, voluntary initiatives, codes of conduct, professional norms, public-private collaborations, and certification programs. These soft-law tactics are sometimes employed to “whitewash” a situation when an issue is not actually being addressed, and soft laws frequently use ambiguous, generic language that makes compliance difficult to gauge. Because they are not directly enforceable, such measures are inherently flawed.

Finally, compared with traditional government regulation, soft-law measures typically do not give the public the same assurance that the issues raised by a new technology are being adequately addressed, even though reassuring the public is an essential ancillary purpose of regulation. In 2016, the Institute of Electrical and Electronics Engineers, one of the world’s major standard-setting and professional engineering organizations, unveiled arguably the most comprehensive soft-law project for AI.

Who will initiate AI ethics? 

Because no regulations currently address the novel legal challenges created by AI, there is an opportunity to create a complete set of standards that could be used globally. This would spare individual legislatures the costs and difficulties of enacting legislation independently and limit the expenditure AI designers face in adhering to numerous codes. There is, therefore, a dire need to establish an international regulatory agency for AI, which could help balance nationalism with internationalism, and self-interest with altruism. Certain countries may already be aware of AI’s immense potential to benefit the entire world if its power can be harnessed; because of their humanitarian values, such nations are more inclined to embrace a system of international norms.

New laws on AI should be made by legislatures, not by judges or private enterprises. Government AI strategies often fall into at least one of three categories: fostering the development of the domestic AI industry, regulating AI, and addressing AI-related unemployment. These goals can create friction, but they can also reinforce one another. For example, the UK has joined the World Economic Forum’s new council on artificial intelligence to influence the global governance and application of this new technology. The EU has started several projects to create a comprehensive AI strategy, including its regulation, and the USA has adopted a model for AI regulation focused on safety, fairness, and governance. Likewise, China’s State Council published “A Next Generation Artificial Intelligence Development Plan” in 2017, which envisages economic growth from AI and calls for AI security assessment and control capabilities, ethical standards, policy systems, and laws and regulations to be initially established.

Companies must also create policies establishing ethical guidelines for AI researchers, in collaboration with academics and experts. Some businesses have already published their own declarations of guiding principles or standards for AI, and many IT corporations have taken precautions to keep their technology from falling into the wrong hands. DeepMind has established its own Ethics & Society unit, which researches six essential topics: privacy, transparency, justice, economic impact, inclusion, and equality. Similarly, in June 2018, Google CEO Sundar Pichai set out seven principles that the company will adhere to in its AI endeavours.

Future of AI ethics/Way Forward

The future of AI ethics is an exciting and rapidly evolving field. As the technology continues to develop, so do the ethical considerations around it. With the proliferation of AI-powered products, services, and systems, we must ensure that their development and deployment are carried out responsibly and ethically. To this end, several organizations have recommended a way forward: transparency, accountability, and fairness in the development of AI products and services, together with safeguards to ensure that AI systems are safe and secure. The appropriateness of ethical and legal restrictions on AI use must be assessed, and it may be necessary to establish a regulatory framework that delivers definite legal outcomes. Ethical consideration must also be given to the technology’s impact on people’s privacy and security and its potential to cause harm. Maintaining the safety and security of AI systems requires improving explainability, openness, and trust, and attention must be focused on how artificial intelligence might benefit people at the social, personal, and individual levels.

Conclusion

To sum up, ethical frameworks, codes, and new legislation are necessary to secure the deployment and development of AI. Traditional approaches to governing AI have become redundant, and while national governments and companies have established various soft laws, their implementation is not up to the mark. The gap between these ethical issues and the governance of AI needs to be bridged under the umbrella of international law and regulatory agencies. Moreover, national governments should frame legislation to secure the future, as the USA, China, the UK, and France are already doing through their AI policies and legislation.

References

Henning, K. (2020). Gamechanger AI: How Artificial Intelligence is changing our world. Aachen, Germany: Springer.

Marchant, G. (2019). “Soft Law” Governance of Artificial Intelligence. UCLA: The Program on Understanding Law, Science, and Evidence (PULSE), 1-19.

Ramar, S. (2019). Artificial Intelligence: How it Changes the Future. Madurai: Independently published.

Turner, J. (2019). Robot Rules: Regulating Artificial Intelligence. London: Palgrave Macmillan.

Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 316-334.

Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5–6), 88–96. 

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A, 376(2133).

Dafoe, A. (2018). AI Governance: A Research Agenda. Oxford, UK: Future of Humanity Institute, University of Oxford.

House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? Report of Session 2017–19, HL Paper 100, https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf, accessed 30 December 2022.

Nicholas Thompson, “Emmanuel Macron Talks to Wired About France’s AI Strategy,” Wired, 31 March 2018, https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/, accessed 30 December 2022.
