Ethical Problems in AI and Digital Legislation in Nigeria (Part 1)
Introduction
In the twenty-first century, technology does not merely evolve; it accelerates. One of the most significant accelerants of this digital age is Artificial Intelligence (AI), a transformative force promising efficiency, personalisation and automation on a scale never before imagined. But, with this great promise come unfamiliar perils, particularly in developing countries like Nigeria, where the rush to digitise has often outpaced the legal frameworks necessary to protect citizens from the unintended consequences of AI-driven systems. As machines increasingly make automated decisions that affect human lives, from granting loans to profiling individuals, one urgent question emerges: how can we ensure that AI actually serves humanity, rather than undermining it?
What is AI?
Before answering the above question, we must understand what AI actually is. The European Commission defines AI as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”.
The Organisation for Economic Co-operation and Development (OECD) offers a similar definition, describing AI as “a machine-based system that, for a given set of human-defined objectives, makes predictions, recommendations or decisions influencing real or virtual environments.” The Alan Turing Institute, a pioneer in ethical technology research, defines AI as “The design and study of machines that can perform tasks that would previously have required human (or other biological) brainpower to accomplish”. Across these definitions, one theme clearly stands out: AI systems operate autonomously, adaptively and at scale, qualities that make them powerful, but also potentially dangerous without adequate oversight.
AI and Human Rights
Let’s now contrast this with the concept of Human Rights. Human rights are those fundamental freedoms and entitlements that belong to every person, simply by virtue of being human. These include the right to life, liberty, dignity, privacy, equal treatment under the law and freedoms of movement, association, assembly, conscience and religion. The Universal Declaration of Human Rights (UDHR) affirms that human rights are “those inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status. These rights are not granted by any State, but are inherent to all individuals simply by virtue of being human”. The African Charter on Human and Peoples’ Rights expands this notion to include collective rights and cultural identity, emphasising human dignity as central to governance. In Nigeria, these rights are not abstract ideals; they are guaranteed by the Constitution of the Federal Republic of Nigeria, 1999 (as amended), with Section 37 specifically enshrining the right to privacy.
The Intersection Between AI and Human Rights
The problem lies at the intersection between AI and human rights. While AI expands what is technologically possible, it also stretches the boundaries of what is legally and ethically permissible. AI systems are trained on vast datasets, often containing sensitive personal information from biometric scans to financial histories. Without adequate guardrails and constant monitoring, these systems can entrench bias, erode individual freedoms and autonomy, and violate constitutional protections. They need not be malicious or deliberate to be dangerous. A poorly designed algorithm can discriminate just as effectively as a prejudiced human, only faster, more invisibly, and on a greater scale.
Nigeria is acutely aware of this reality. As digital infrastructure has expanded through platforms such as the National Identity Management Commission’s (NIMC) identity database, the Bank Verification Number (BVN), fintech apps, and e-government portals, the risks to privacy and dignity have also greatly multiplied. Recognising the inadequacy of its prior regulatory framework, the Nigeria Data Protection Regulation (NDPR) 2019, the country enacted the Nigeria Data Protection Act (NDPA) in June 2023. The NDPA is not a complementary statute; it repealed and replaced the NDPR, establishing a more comprehensive and enforceable legal framework for data protection in Nigeria. With this Act, Nigeria has finally aligned herself with global standards, signalling that data protection is not a luxury, but a legal imperative in a digital economy.
The NDPA introduces a paradigm shift. It creates the Nigeria Data Protection Commission (NDPC) as the regulatory authority, mandates lawful and transparent data processing, and codifies the rights of data subjects, including the rights of access, rectification, erasure, data portability and objection to automated decision-making. Very significantly, it also requires Data Protection Impact Assessments (DPIAs) for high-risk processing, reinforcing the principle that privacy and ethical risks must be considered before deploying any data-driven system. This is not just a compliance issue; it is a crucial human rights issue.
Ethics by Design
Yet, legislation alone is not enough. The critical missing link is the integration of Ethics by Design, a proactive approach to AI governance that embeds ethical considerations directly into the technical architecture and policy-making processes of AI systems. Ethics by Design is not a slogan; it is a philosophy of responsibility. It asks: Are the algorithms fair? Can their outcomes be explained? Do they respect user autonomy and dignity? Who gets to design them, and who gets to challenge their decisions? These are the serious ethical questions Nigeria must now confront if it is to create AI systems that uplift rather than oppress.
The relevance of this approach becomes painfully clear when we examine incidents like the failed launch of the NIMC Mobile ID App in 2020. The app, initially released without proper vetting or public notice, generated digital identities for unintended users and exposed personal data, prompting legal challenges under Section 37 of the Constitution. Had a Data Protection Impact Assessment been conducted, as now required under the NDPA, this fiasco might have been avoided. Such events illustrate how technological missteps can quickly morph into constitutional violations.
AI’s Capacity for Systemic Bias
Furthermore, AI’s capacity for systemic bias is not merely hypothetical. Consider financial platforms using AI-driven credit scoring models in Nigeria. If trained on flawed or exclusionary data, these models may deny credit to entire demographic groups, not because of poor creditworthiness, but because of historical marginalisation. Similarly, facial recognition systems have been shown globally to misidentify individuals with darker skin tones, raising alarms about their deployment by Nigerian security agencies. Without ethical design and oversight, these tools risk exacerbating the very inequalities they claim to address.
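To make the mechanism concrete, the sketch below (a hypothetical illustration in Python, with invented group labels, thresholds and figures, not a description of any real Nigerian lender) shows how a scoring model that merely learns to reproduce past lending decisions will also reproduce the bias embedded in those decisions, even when applicants in both groups are equally creditworthy.

```python
# Hypothetical sketch (not a real credit-scoring system) of how a model that learns
# to reproduce historical lending decisions also reproduces the bias embedded in them.
# Group labels, thresholds and figures are invented for illustration only.
import random

random.seed(42)

def make_historical_record():
    """Simulate one past loan decision.
    Both groups have identical repayment ability, but Group B applicants were
    historically held to a stricter bar for reasons unrelated to creditworthiness.
    """
    group = random.choice(["A", "B"])
    repayment_ability = random.random()        # same distribution for both groups
    bar = 0.4 if group == "A" else 0.7         # the historical double standard
    return group, repayment_ability, repayment_ability > bar

history = [make_historical_record() for _ in range(10_000)]

# "Training": recover, per group, the approval threshold implicit in past decisions.
# A statistical model fitted on the same records would pick up the same pattern,
# often through proxy variables (address, device type) rather than the group itself.
def learned_threshold(records, group):
    return min(score for g, score, approved in records if g == group and approved)

model = {g: learned_threshold(history, g) for g in ("A", "B")}

# Score new applicants drawn from identical merit distributions.
applicants = [(random.choice(["A", "B"]), random.random()) for _ in range(10_000)]
for group in ("A", "B"):
    scores = [s for g, s in applicants if g == group]
    rate = sum(s >= model[group] for s in scores) / len(scores)
    print(f"Group {group}: approval rate {rate:.0%}")
# Typical output: Group A is approved roughly twice as often as Group B,
# despite identical creditworthiness - the historical bias has been "learned".
```

The same logic applies, with far more sophistication, to real machine-learning pipelines: if the historical labels encode discrimination, a model optimised to predict those labels will optimise for that discrimination.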
Problems Between Data Subjects and Data Controllers
There is also a democratic angle to consider. In a country where civic awareness around digital rights remains low, the opacity of AI systems compounds the imbalance of power between data subjects and data controllers. Citizens often do not know what data is being collected, how it is being used, or how to challenge its misuse. While the NDPA addresses this asymmetry through transparency and accountability clauses, real-world enforcement will require the NDPC to be both technically sophisticated and politically independent. Otherwise, the law becomes a ceremonial shield, not a functional sword.
Ethics by Design in Nigeria must, therefore, go beyond the courtroom and the codebase. It must include grassroots participation, inclusive innovation, and capacity-building across all sectors. It means inviting civil society organisations, digital rights activists, technologists, and vulnerable communities into the design of digital governance tools. It means creating AI systems that are not only efficient, but equitable; not only intelligent, but humane.
The question is no longer whether Nigeria will use AI. It is whether AI in Nigeria will respect the principles that define a democratic society: dignity, autonomy, justice and accountability. The NDPA provides legal scaffolding. Now is the time to build a moral architecture. The Ethics by Design framework offers Nigeria a rare opportunity to lead, not only in innovation, but in ethical innovation. And, in a world where technology increasingly shapes the human experience, there may be no more important challenge.
The Evolution of AI and Concerns Generated Thereby
AI has progressed from mere rule-based systems to machine learning and deep learning models capable of autonomous decision-making. Applications range from healthcare diagnostics to autonomous vehicles, predictive policing, and financial algorithms. While AI enhances productivity, concerns arise over:
– Job displacement due to automation.
– Surveillance capitalism where personal data is exploited for profit.
– Algorithmic governance where AI influences public policy without sufficient oversight.
Conceptual Origins of AI
The conceptual origins of AI can be traced to the mid-20th century, when pioneering figures such as Alan Turing and John McCarthy began to explore the possibility of creating machines capable of simulating human intelligence. Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” posed the provocative question, “Can machines think?” – a question that laid the philosophical groundwork for modern AI research. McCarthy, who coined the term “artificial intelligence” in 1956, convened the historic Dartmouth Conference, widely considered the birth of AI as a formal field of inquiry.
Early Aspirations and Technological Milestones
Early AI efforts focused on symbolic logic, rule-based systems, and expert systems, which relied on hand-coded rules to simulate decision-making processes. These systems, while limited in scope, found application in fields such as medical diagnostics (for example, MYCIN) and chess-playing algorithms. The emergence of machine learning in the late 20th century – particularly supervised learning techniques – ushered in a new era in which machines could learn patterns from data, rather than rely solely on pre-programmed rules.
The exponential growth in computing power, availability of big data, and algorithmic innovation have since culminated in what many scholars refer to as the “AI revolution”. Notable developments include deep learning techniques powered by artificial neural networks, natural language processing exemplified by large language models (LLMs), and computer vision systems that rival or exceed human performance in specific domains.
From Automation to Autonomy
AI has transitioned from automating repetitive tasks to performing complex cognitive functions previously thought to be the exclusive domain of humans. Self-driving cars, AI legal assistants, autonomous drones, and AI-generated art demonstrate the breadth of AI’s applications. As these systems grow in sophistication, they increasingly exhibit autonomy – the capacity to make decisions and take actions without direct human intervention. This shift raises profound questions about accountability, transparency, and control.
For example, autonomous weapons systems capable of selecting and engaging targets without human oversight challenge existing norms under international humanitarian law (IHL). Similarly, AI systems deployed in judicial or parole decisions raise concerns about bias, fairness, and due process, especially when the logic behind decisions is opaque even to their developers – a phenomenon referred to as the “black box problem.” (To be continued)
THOUGHT FOR THE WEEK
“When I say, ‘I stand for equal rights’, I mean equal rights for all persons… from the moment of conception until natural death. I mean that I believe in the equal human dignity of all persons, no matter the ‘contribution’ they make to society.” (Abby Johnson)







