Naiho: Ungoverned AI Is Quietly Scaling Risk in Nigeria
Dr. Henry Naiho, who holds a Doctor of Philosophy (PhD) in Data & Cybersecurity and a Doctor of Business Administration (DBA) in Executive Leadership, is an authority in AI governance and enterprise risk with over 26 years of executive and advisory experience spanning telecommunications, enterprise systems, cybersecurity, and large-scale digital transformation across Africa and global markets. He works with boards of directors, executive leadership, and regulators at moments when decisions carry strategic, regulatory, and reputational consequences, helping institutions govern AI and complex digital systems with clear accountability and defensible oversight.
In this interview, he speaks about ungoverned AI and its scaling risk in Nigeria.
You are widely described as an AI Governance and Enterprise Risk Authority. How did your 26+ years across multiple sectors shape this positioning?
My positioning was shaped by working in sectors where failure has immediate, visible consequences: telecommunications outages that disrupt national connectivity, banking system failures that freeze customer access to funds, construction and manufacturing breakdowns that compromise safety and delivery timelines, government systems that affect citizens' rights, and healthcare platforms where errors can affect human life. Across these sectors, I observed a consistent pattern: when systems fail, the public does not ask which technology failed; they ask who was responsible. That reality forced me to think beyond delivery and into governance, accountability, and decision ownership. For instance, a nationwide network upgrade improves capacity but introduces intermittent service disruptions.
Engineers troubleshoot, but regulators, customers, and the media want to know: who approved the change? What safeguards were in place? Why was the impact not anticipated? That moment is not technical; it is governance. Over time, these experiences shaped a governance-first approach: technology must serve institutions, and institutions must remain accountable for outcomes.
What are the key roles AI plays in reshaping organisational decisions?
AI is reshaping organisational decision-making not by replacing leadership, but by changing the quality, speed, and defensibility of decisions. In Nigeria's operating environment, characterised by market volatility, infrastructure constraints, regulatory scrutiny, and fraud risk, AI plays five critical roles.
First, signal extraction from complexity: most organisations already have data; the problem is meaning, not volume. AI identifies patterns, correlations, and anomalies across transactions, networks, operations, and customer behaviour that humans cannot see at scale.
Second, early warning and predictive insight: AI shifts decision-making from reactive to anticipatory, forecasting failures, fraud surges, demand shocks, or operational stress before they crystallise into losses.
Third, decision consistency at scale: AI enables repeatable decision logic in high-volume environments such as transactions, alerts, and service incidents, reducing arbitrary or emotionally driven actions.
Fourth, trade-off visibility: good decisions are not about "best answers" but about explicit trade-offs, such as speed versus control, growth versus risk, or automation versus fairness. AI helps model the options, but humans must decide which trade-off to accept.
Fifth, evidence creation for accountability: as scrutiny increases, organisations must prove why a decision was taken. AI-assisted decisions therefore require governance, with clear records of the data used, the assumptions accepted, and the human approval given.
Let us look at the critical roles AI plays across the sectors of our economy. In telecoms, AI analyses network telemetry and predicts congestion risk before public holidays, so executives can approve pre-emptive capacity reallocation and avoid mass service complaints; it also flags repeated micro-failures across base stations linked to power instability, so maintenance is scheduled before a nationwide outage occurs. In banking and financial services, AI detects early fraud patterns across mobile transfers before losses spike, allowing management to escalate thresholds with documented approval, and it identifies abnormal transaction velocity tied to mule accounts, so human investigators can intervene selectively and reduce false positives. In manufacturing, AI predicts bearing failure on critical equipment, preventing unplanned downtime that could halt production for days, and spots rising defect patterns early in a batch process, allowing corrective action before large-scale scrap occurs. In construction, AI detects schedule-slippage patterns across subcontractors, so project leadership can intervene before cost overruns compound, and flags safety-risk indicators such as weather, fatigue, and workforce changes, prompting preventive safety controls. In healthcare, AI predicts patient-deterioration risks, so clinicians can intervene earlier and improve outcomes without surrendering clinical authority, and highlights medication-error risk patterns, triggering process reviews. In government procurement, AI identifies bid-rigging signals, allowing officials to initiate investigations with documented decision trails, and forecasts service-delivery bottlenecks ahead of elections, allowing proactive planning.
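To make the "signal extraction" and "early warning" roles concrete, here is a minimal, purely illustrative Python sketch (not Dr. Naiho's actual system): a new transaction is compared against an account's recent history using a simple standard-deviation rule. The figures and threshold are invented; real fraud models are far richer.

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it lies more than `threshold` standard
    deviations from the mean of the account's recent history."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) / sigma > threshold

# Hypothetical recent transfers for one account, in naira.
history = [12_000, 9_500, 11_200, 10_800, 9_900, 10_500]
print(is_anomalous(history, 950_000))  # a sudden large transfer
```

In practice the flag would feed an alert queue for human review, not trigger an automatic block, which is the governance point made throughout this interview.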
How does AI influence governance, especially at board level?
AI fundamentally alters governance because it introduces scalable decision influence: a single algorithmic change can affect millions of customers or citizens instantly. This elevates AI from an IT issue to a board-level governance issue. Boards must govern AI across four dimensions. Accountability: AI cannot be accountable, so boards must ensure named executives remain responsible for decisions influenced by AI. Auditability: boards must demand traceability, covering what data informed the recommendation, what assumptions were accepted, and who approved the final decision. Risk oversight: AI introduces new risks, such as model drift, bias, cyber manipulation, and data-integrity failures; these are enterprise risks, not technical issues. Decision rights: boards must define thresholds, specifying what AI can assist operationally, what requires executive sign-off, and what requires board visibility.
There are real-world governance lessons here. Globally, multiple public-sector AI systems have been suspended or challenged because automated decisions lacked transparency and human oversight. These cases demonstrate that ungoverned AI erodes trust faster than it creates efficiency. In telecoms, for instance, the board requires executive sign-off for AI-recommended nationwide parameter changes, and AI optimisation proposals are reviewed against customer-impact risk thresholds. In banking, the board mandates that AI-flagged account freezes above a threshold require senior approval, and AI credit decisions must produce explainable outputs for audit. In manufacturing, the board oversees AI-driven quality controls affecting regulatory compliance, and AI-recommended supplier changes are reviewed for ESG risk. In construction, AI cost-forecasting models are governed under capital-approval frameworks, and safety-risk AI outputs trigger mandatory management escalation. In healthcare, one of the most sensitive sectors and globally considered the wealth of every nation, the board ensures AI diagnostic support tools are advisory only, and audit committees review AI-assisted clinical incidents. In government, AI welfare-screening decisions must carry appeal mechanisms, and policy committees should oversee AI-based citizen risk scoring.
With the current high rate of financial crimes in Nigeria, how can AI help mitigate this trend?
Nigeria's financial crime challenge is structural and systemic. Reports show fraud losses exceeding ₦13 billion annually, with cybercrime costing the economy hundreds of billions of naira over time. AI is essential, but only if governed properly. When governed, AI helps in four ways. Advanced pattern detection: AI identifies fraud patterns humans miss, such as mule networks, synthetic identities, and insider-enabled schemes. Real-time intervention: transactions are assessed in milliseconds, reducing loss windows. Alert prioritisation: AI reduces false positives, allowing teams to focus on high-risk cases. Regulatory defensibility: documented AI-assisted decisions protect institutions during audits and investigations. One of the key failure modes is ignoring governance warnings: many fraud losses occur not because AI failed, but because alerts were ignored, thresholds were overridden, or accountability was unclear. Let us situate this sectorally. In banking, AI detects coordinated mule activity, and the bank escalates under a documented fraud-decision framework; AI also identifies abnormal FX transaction behaviour, and senior risk officers approve intervention. In telecoms, AI flags SIM-swap patterns linked to fraud rings, and the telco collaborates with banks and law enforcement; AI also predicts surges in SMS-based phishing, so preventative customer warnings are issued. In e-commerce, AI detects account-takeover attempts during sales campaigns and blocks coordinated refund abuse with human review. In government, AI flags revenue-leakage patterns for audit teams to investigate and identifies abnormal benefit claims linked to organised fraud.
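The "abnormal transaction velocity tied to mule accounts" signal mentioned above can be sketched very simply: count each account's transactions inside a sliding time window and flag accounts that exceed a limit. The window size, limit, and account labels below are all hypothetical, for illustration only.

```python
from collections import defaultdict

def velocity_alerts(events, window_secs=60, max_tx=5):
    """Flag accounts exceeding `max_tx` transactions within any
    `window_secs` sliding window. `events` is (timestamp, account)."""
    by_account = defaultdict(list)
    flagged = set()
    for ts, acct in sorted(events):
        # keep only timestamps still inside the window, then add this one
        times = [t for t in by_account[acct] if ts - t < window_secs]
        times.append(ts)
        by_account[acct] = times
        if len(times) > max_tx:
            flagged.add(acct)
    return flagged

# Hypothetical feed: one burst of 7 transfers in 30 seconds,
# plus a normal account with two well-spaced transfers.
events = [(i * 5, "MULE-01") for i in range(7)] + [(0, "ACCT-77"), (400, "ACCT-77")]
print(velocity_alerts(events))
```

As the interview stresses, a flag like this should route the account to human investigators rather than trigger an automatic freeze.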
How can AI help in swift profiling of online transactions to stop fraudulent e-business activity?
AI enables real-time, risk-based decisioning, replacing the static rules that criminals easily bypass. The core capabilities are behavioural profiling (how users act, not just who they claim to be), device and network fingerprinting, transaction velocity analysis, and fraud-ring detection via network analysis. The critical governance point is that automated blocking without explanation creates legal and reputational risk; AI must support escalation and review, not silent exclusion. In banking and fintech, AI blocks suspicious transfers mid-flow pending review and scores merchant risk dynamically during onboarding. In retail and e-commerce, AI detects bot-driven checkout abuse and flags chargeback-prone customers. In government portals, AI identifies abnormal tax-filing behaviour and detects fake service-access patterns.
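The "escalation and review, not silent exclusion" principle can be illustrated with a toy risk-scoring sketch: signals are combined into a score, and high scores are queued for human review rather than silently blocked. Every feature name, weight, and cutoff below is invented for illustration.

```python
def risk_score(tx):
    """Combine illustrative risk signals into a score between 0 and 1."""
    score = 0.0
    if tx.get("new_device"):
        score += 0.4  # unrecognised device fingerprint
    if tx.get("ip_country") != tx.get("home_country"):
        score += 0.3  # network location mismatch
    if tx.get("velocity_per_min", 0) > 5:
        score += 0.3  # abnormal transaction velocity
    return min(score, 1.0)

def decide(tx, review_at=0.5):
    """Never silently exclude: risky transactions go to a human queue."""
    return "human_review" if risk_score(tx) >= review_at else "allow"

print(decide({"new_device": True, "ip_country": "GB",
              "home_country": "NG", "velocity_per_min": 9}))
```

Real deployments would learn such weights from data and record every decision for audit, but the escalation structure is the governance point.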
In the telecoms space, how can AI help troubleshoot network problems before they occur?
Telecom networks generate vast operational data, and AI converts this into predictive resilience. The key applications are predictive maintenance, identifying equipment-failure risks early; anomaly detection, spotting unusual traffic, latency, or signalling behaviour; root-cause acceleration, correlating faults across network layers; and customer-impact forecasting, prioritising fixes based on service exposure.
Studies in network operations show predictive maintenance can reduce downtime by 30 to 50 per cent and cut operational costs significantly. In telecom operations, for instance, AI predicts power-related base-station failures ahead of storms and forecasts congestion from major events, recommending pre-emptive optimisation. For emergency services, AI ensures network resilience for emergency communications and prioritises infrastructure protection during national events.
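A much-simplified flavour of predictive maintenance is trend extrapolation: fit a line to a rising metric and estimate how many readings remain before it crosses an alarm level, so maintenance can be scheduled first. The readings and the 80-degree threshold below are hypothetical; production systems use proper time-series and survival models.

```python
import math

def steps_to_threshold(readings, threshold):
    """Fit a naive linear trend and estimate how many future readings
    remain before the metric crosses `threshold` (None if not rising)."""
    n = len(readings)
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    if slope <= 0:
        return None  # flat or improving trend: no predicted crossing
    return max(0, math.ceil((threshold - readings[-1]) / slope))

temps = [61, 63, 64, 66, 68, 69]      # bearing temperature readings (degrees C)
print(steps_to_threshold(temps, 80))  # readings left before the alarm level
```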
Why do AI and digital transformation failures in Nigeria usually reflect governance breakdowns rather than technology limitations?
Because Nigerian organisations operate in high-pressure environments, with unstable infrastructure, evolving regulation, security risks, and intense competition, governance must be stronger, not weaker. Failures typically arise from unclear accountability, weak oversight, no assurance testing, no escalation triggers, and poor documentation. For example, a digital identity or benefits platform automates approvals, citizens are denied services without explanation, and public backlash follows. The issue is not software accuracy; it is the absence of appeal mechanisms, accountable owners, audit trails, and governance oversight. Technology executes decisions; governance determines whether those decisions are defensible.
What delivery mistakes do Nigerian executives repeatedly underestimate when deploying AI and digital systems?
Common mistakes across sectors include poor data governance, over-reliance on vendors, lack of operational readiness, no monitoring for model drift, and weak cybersecurity integration. For instance, a construction firm deploys digital project controls and automation, but data is inconsistent across sites, leading to wrong forecasts and delays. The issue is not the software; it is the lack of governance over data quality, accountability, and change control. Delivery succeeds only when governance supports execution.
What risks arise when AI systems are outsourced or imported into Nigeria?
These risks are imminent because Nigerian environmental and behavioural realities were not considered in such systems. The key risks are opaque decision logic, data-sovereignty issues, cultural and contextual bias, delayed incident response, and accountability gaps. For example, a fintech imports a foreign AI credit model. It performs poorly on local customer profiles, excluding legitimate borrowers, and when challenged, the firm cannot explain its decisions. Regulators hold the institution accountable, not the vendor, because outsourcing does not outsource responsibility.
How will your doctoral research areas inform governance of real-time AI decisions?
My work emphasises that systems operating in real time must be governed for robustness under stress, adaptability without losing control, accountability for outcomes, and auditability after the fact. Consider healthcare and banking, for instance: an AI blocks transactions or prioritises patients automatically. Governance must define acceptable error thresholds, escalation rules, remediation timelines, and evidence retention. This is how research becomes governance capability.
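Such governance parameters can live in machine-readable policy rather than in engineers' heads. The sketch below is a hypothetical encoding (every name and value is invented) showing one way an error threshold, an approval limit, escalation timing, and evidence retention might be expressed and enforced.

```python
# Hypothetical governance policy for a real-time decisioning system.
GOVERNANCE_POLICY = {
    "max_false_block_rate": 0.02,       # acceptable error threshold
    "auto_action_limit_ngn": 500_000,   # above this, a named officer approves
    "escalation_minutes": 15,           # unresolved alerts escalate after this
    "evidence_retention_days": 365 * 7, # decision records kept for audit
}

def requires_human_approval(amount_ngn, policy=GOVERNANCE_POLICY):
    """High-impact automated actions need a named human approver."""
    return amount_ngn > policy["auto_action_limit_ngn"]

print(requires_human_approval(750_000))
```

Codifying thresholds this way makes them reviewable by audit committees, which is the defensibility the interview keeps returning to.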
What must Nigerian boards and executives do now to ensure AI strengthens long-term value?
Three actions: establish board-level AI governance, integrate AI into enterprise risk management, and make defensibility a condition for scale. For example, compare manufacturing and banking: two firms deploy AI. One prioritises speed and cost only, and it faces public backlash and regulatory scrutiny. The other builds governance, assurance, and accountability, and it earns trust and long-term advantage. In Nigeria, sustainable value belongs to institutions that govern AI as a fiduciary responsibility, not as a technical project.
How will your multi-AI agent systems help act as a “Digital Sentry” against cyber telecom threats and attackers?
A modern telecom environment is one of the most attacked ecosystems in any country because it sits at the centre of identity, payments, communications, critical infrastructure, and national security. Attackers target telcos for mass data exposure, SIM-swap enablement, signalling abuse, DDoS, ransomware, supply-chain compromise, and insider misuse. The role of a multi-AI agent system is not to "chase criminals online," but to operate as a continuous, coordinated defence layer that detects weak signals early (before incidents become outages or breaches), correlates across silos (network, IT, applications, identity, fraud, and the SOC), automates triage and containment (SOAR actions with human approval gates), produces an audit-ready decision trail (defensible to regulators, auditors, and boards), and continuously learns (model-drift monitoring with controlled updates).
Why is this urgent? Industry reporting highlights that DDoS and ransomware remain among the most reported, high-impact forms of attack affecting telecoms and critical infrastructure, and the GSMA's Mobile Telecommunications Security Landscape tracks recurring telecom threats across the sector, emphasising the industry's need for a stronger security posture and governance. Telecom breaches and cyber incidents have continued to surface globally; in Africa, for example, major South African telecom incidents have involved alleged data exposure and leakage.
What does the multi-agent system actually do, in plain terms? Think of it as specialised AI agents working like a disciplined security team. A threat-signal collector pulls signals from SIEM logs, firewall and IDS alerts, endpoint and telecom network telemetry (RAN, core, and performance), IAM and privileged-access events, fraud systems (SIM-swap indicators, unusual KYC changes), and OSINT and dark-web mentions (brand and domain impersonation). A correlation and pattern agent then links "small" indicators into one story: suspicious logins plus configuration changes plus abnormal traffic spikes; SIM-swap activity plus unusual mobile-money transfers plus a device-fingerprint mismatch; repeated failed authentication plus a new admin account plus sudden outbound data flows; and many more combinations that would be too technical for our readers.
But such a system must operate under clear rules. Purpose limitation: defend systems, not "hunt people." Human accountability: high-impact actions require named approval. Auditability: every recommendation and action is logged with its rationale. Privacy controls: data minimisation, retention limits, and role-based access. Model governance: drift monitoring, controlled updates, and periodic review. Practical KPIs for a robust, adaptable, and resilient AI system include mean time to detect (MTTD), mean time to respond (MTTR), the percentage of incidents auto-triaged versus escalated, the false-positive reduction rate, availability protected (minutes of downtime avoided), fraud-loss reduction attributable to early containment, and a compliance-readiness score (completeness of decision dossiers).
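The two headline KPIs, MTTD and MTTR, are simple averages over incident timestamps. The sketch below shows one way to compute them; the incident records are entirely hypothetical.

```python
from datetime import datetime

def mean_minutes(pairs):
    """Average gap, in minutes, between (start, end) timestamp pairs."""
    gaps = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(gaps) / len(gaps)

# Hypothetical incidents: (occurred, detected, resolved).
incidents = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 12), datetime(2024, 5, 1, 10, 0)),
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 14, 4), datetime(2024, 5, 3, 14, 40)),
]
mttd = mean_minutes([(o, d) for o, d, _ in incidents])  # occurrence -> detection
mttr = mean_minutes([(d, r) for _, d, r in incidents])  # detection -> resolution
print(mttd, mttr)
```

Tracking these per quarter gives a board a concrete way to see whether the "digital sentry" is actually shortening detection and response over time.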
What are your final words?
AI does not replace leadership. It raises the standard of leadership.
Organisations that succeed with AI are not the most automated—but the most governed.
The future belongs to institutions that combine AI insight with human judgment, clear accountability, and defensible decision-making. Nigerian institutions must move beyond AI adoption to AI accountability.
Dr. Henry Naiho can be reached at https://www.linkedin.com/in/henry-naiho/