Addressing Data Privacy Concerns in Artificial Intelligence Systems: Regulatory Mechanisms in Nigeria
By Kazeem Adedeji
Artificial Intelligence (AI) is no longer a futuristic concept—it’s here, and it’s reshaping the fabric of everyday life in Nigeria. From streamlining banking operations to enhancing healthcare diagnostics, and from improving agricultural yields to automating public services, AI’s transformative potential is vast. But with this wave of innovation comes a cascade of concerns, chief among them: data privacy.
Nigeria, Africa’s largest economy and most populous country, stands at a pivotal point in determining how to protect the privacy of its citizens in a rapidly digitizing society. As AI systems continue to infiltrate sectors and services, they demand enormous amounts of data—often deeply personal. The question then becomes: how prepared is Nigeria to safeguard the digital identities of its citizens against the unique risks posed by AI?
Efforts to establish a legal backbone for data protection in Nigeria began in earnest in 2019 with the introduction of the Nigeria Data Protection Regulation (NDPR), issued by the National Information Technology Development Agency (NITDA). This regulation aimed to lay down foundational principles around data minimization, purpose limitation, and lawful processing. It was a laudable first step but fell short in addressing the intricacies of AI, such as automated decision-making and the infamous “black-box” algorithms that even their creators struggle to fully explain.
In 2023, the Nigerian government built upon the NDPR by passing the Nigeria Data Protection Act (NDPA), providing statutory authority to protect digital privacy. A significant improvement came with Section 41 of the NDPA, which touches on automated decision-making and profiling. The law recognizes the potential for algorithmic harm and seeks to place some checks on how decisions are made by machines. But while these legislative strides are notable, they remain underdeveloped and are often not supported by detailed guidelines, enforcement strategies, or institutional capacity.
The Nigeria Data Protection Bureau (NDPB), created in 2022, now spearheads enforcement and compliance efforts. It works alongside other agencies such as the Central Bank of Nigeria (CBN) and the Nigerian Communications Commission (NCC). But a closer look at their track records reveals a concerning reality. Between 2019 and 2023, only 27 regulatory investigations were conducted under the NDPR and NDPA—none specifically targeting AI-related violations. This points to a glaring enforcement gap and raises questions about the readiness of regulators to tackle privacy violations in the AI age.
One of the most troubling issues is the lack of algorithmic transparency in Nigeria’s legal framework. AI systems often operate as “black boxes,” making decisions in ways that are neither clear nor explainable to the public—or even regulators. The NDPA does not mandate transparency or explainability, leaving users vulnerable to systems that can deny services or make predictions about their lives with little accountability.
Beyond legislative gaps, institutional capacity remains a critical weakness. Internal assessments suggest that fewer than 8% of NDPB’s technical staff are equipped with the skills necessary to evaluate AI systems. In practice, this means the bureau struggles to conduct in-depth audits or enforce compliance, especially when dealing with complex AI models. Moreover, many regulatory bodies lack the computing infrastructure needed to test or monitor AI systems effectively, often relying on companies to self-report breaches or malfunctions. This approach is not only inefficient but also inherently flawed in a space where corporate interests can conflict with public rights.
Cross-border data flows present yet another challenge. AI systems are often trained and deployed across multiple jurisdictions, raising questions about where Nigerian citizens’ data goes and how it’s protected. Nigeria’s data localization requirements remain vague, and there’s little clarity on how foreign jurisdictions are vetted for data adequacy. In the age of global AI development, such regulatory blind spots leave Nigerian data exposed to abuse.
Meanwhile, public awareness around AI and privacy is strikingly low. Many Nigerians are unaware that their data is being harvested, let alone how to seek redress if it’s misused. Surveys by the Nigerian Artificial Intelligence Alliance reveal that most people do not understand their rights under existing data protection laws. Legal recourse, where available, is often inaccessible or too slow to be meaningful. For the average Nigerian, the idea of contesting a decision made by an AI system remains an abstract concept, buried beneath layers of legalese and bureaucracy.
When benchmarked against global best practices, Nigeria’s data protection regime clearly falls short. The European Union’s General Data Protection Regulation (GDPR) has become a global gold standard, offering robust safeguards against automated decision-making, complete with mandates for human oversight and detailed audit procedures. In Africa, South Africa’s Protection of Personal Information Act (POPIA) includes enforceable clauses on algorithmic accountability. Kenya’s law goes a step further by embedding transparency into its AI frameworks, while Rwanda has adopted a holistic approach by integrating AI governance into its national digital strategy.
Outside Africa, countries like Brazil and India offer models that Nigeria could adapt. Brazil’s General Data Protection Law includes provisions for data impact assessments, while India’s proposed legislation emphasizes cross-border data safeguards and rights-based protections in AI contexts. These examples highlight the importance of designing privacy laws that are proactive, not reactive.
So, what can Nigeria do to bridge this gap?
First, the NDPA needs to be amended to include mandatory transparency and explainability requirements for AI systems. Systems that significantly impact individuals—such as those used in hiring, lending, or law enforcement—should be categorized as high-risk and subject to stricter oversight. Citizens should have clear rights to challenge decisions made by AI and to demand human review.
Second, Nigeria must invest in the institutions that regulate data and technology. This means increasing funding to the NDPB, hiring AI specialists, and creating a multi-agency AI Ethics and Privacy Task Force. This task force could ensure that regulatory frameworks evolve alongside technological developments.
Third, the country needs to harmonize its regulatory architecture. The responsibilities of the NDPB, NITDA, CBN, and NCC often overlap, leading to confusion and duplication. A national AI governance framework should clarify jurisdiction and create a unified front for data privacy enforcement.
Fourth, infrastructure must not be overlooked. Nigeria should work with local universities and tech hubs to build tools that help regulators assess the privacy risks of AI systems. Regulatory sandboxes—controlled environments where new technologies can be tested under supervision—would also provide a safe space to understand how AI tools operate in real-world conditions.
Finally, public engagement is crucial. Educational campaigns can demystify AI and inform citizens about their rights. Collaboration with civil society and tech companies can drive co-regulation, ensuring that privacy isn’t just a box to check, but a shared priority. Nigeria should also be more active in international forums, sharing its experiences and learning from others as global AI standards evolve.
In the end, the success of Nigeria’s AI revolution depends not just on innovation but on trust. Without strong protections in place, public confidence in digital systems will wane, stalling progress. A rights-based, innovation-friendly approach to AI governance is not a luxury—it’s a necessity.
Nigeria has taken important steps, but more must be done. The country is well-positioned to lead in responsible AI deployment on the continent, but only if it moves swiftly to close existing legal, institutional, and technological gaps. The moment demands more than piecemeal reforms—it calls for a comprehensive vision of how AI and privacy can coexist in a digital future that benefits all Nigerians.