WHO IS RESPONSIBLE WHEN AI FAILS?
The future of AI must not be built on immunity, but on integrity, argues SONNY IROCHE
The recent policy direction emerging from Washington, suggesting that the United States should prioritise innovation while shielding artificial intelligence companies from legal liability, has triggered an important global debate. The argument, at its core, is simple: do not stifle innovation. But beneath that simplicity lies a far more complex and troubling question: who bears responsibility when AI systems cause harm?
As an AI strategist working across banking and finance, governance, policy, and enterprise transformation, I find the current posture deeply instructive, not just for the United States, but for Nigeria, the rest of Africa, and other emerging regions that are still shaping their own AI futures.
The White House recommendations, as interpreted by critics, appear to lean toward a familiar model in technological revolutions: protect the innovators first, regulate later. This was the approach taken during the early days of the internet and social media. The consequences are now well documented: misinformation, data exploitation, algorithmic bias, and the erosion of public trust.
We must be careful not to repeat history: innovation without accountability is a strategic risk.
Artificial Intelligence is not just another technology. It is a decision-making system, one that increasingly influences finance, healthcare, law enforcement, education, and national security.
When an AI system denies a loan, flags a transaction as fraudulent, misdiagnoses a patient, or influences democratic processes, it is not merely "software at work." It is power being exercised.
To shield AI companies from legal accountability in such contexts is to create what I would describe as "asymmetric responsibility": impact is societal, but liability is optional. This is neither sustainable nor ethical.
The Missing Middle: Responsible Innovation
There is a false dichotomy often presented in policy circles: regulate too early and innovation dies; regulate too late and harm proliferates.
The real answer lies in what I call “Responsible Acceleration.”
This means encouraging innovation while embedding governance from the outset.
Frameworks already exist to guide this balance. The UNESCO AI Readiness Assessment Methodology (RAM), to which I have had the privilege of contributing as a member of the UNESCO Technical Working Group on RAM, emphasises human rights, accountability, transparency, and institutional capacity.
Similarly, the Oxford-style AI readiness frameworks stress that capability must precede deployment. What is striking in the current U.S. posture is not the desire to innovate (that is expected) but the relative de-emphasis on enforceable accountability mechanisms.
Legal Immunity Today, Systemic Risk Tomorrow
History teaches us that early immunity often leads to later overcorrection.
If AI companies are broadly shielded from liability, there is reduced incentive to invest in safety, risk is externalised to society, and trust in AI systems declines.
And when trust declines, two things happen: first, adoption slows; second, regulation becomes reactionary and heavy-handed. Ironically, the very innovation policymakers seek to protect becomes constrained.
What, one may ask, are the implications for Africa? Africa must pay close attention. We are not yet locked into any one regulatory model. This gives us a rare advantage: the ability to design AI governance correctly from the start.
If we simply import models that prioritise speed over safety and scale over accountability, we risk building fragile digital economies.
Nigeria, South Africa, Kenya, Egypt, Zimbabwe, and Morocco, countries already advancing AI strategies, must instead adopt a more balanced approach: encourage innovation, yes, but embed governance from day one.
This includes clear liability frameworks, AI risk classification systems, board-level oversight in enterprises, and national AI readiness assessments.
The Corporate Dimension: Lessons for Institutions
For organisations such as financial institutions, insurers, and fintechs, the implications are immediate.
No serious board should accept a position where: “The AI system made the decision, therefore no one is accountable.”
That is not governance. That is abdication.
Boards must insist on human-in-the-loop accountability, audit trails for AI decisions, defined risk thresholds, and vendor accountability.
In my advisory work, I emphasise a simple principle: “If you cannot explain it, you should not deploy it.”
A Call for Strategic Balance
The United States remains the global leader in AI innovation. Its policy choices will influence the rest of the world.
But leadership is not only about speed; it is about direction. A national AI policy that underweights risk and overprotects corporate actors risks creating regulatory backlash, societal distrust, and long-term systemic instability.
The future of AI must not be built on immunity, but on integrity.
The debate is not between innovation and regulation. It is between short-term acceleration and long-term sustainability.
We must choose wisely, because in artificial intelligence, unlike previous technologies, the stakes are higher. AI does not just amplify human capability; it can also amplify human error, bias, and harm at unprecedented scale.
And when that happens, the question will not be, "Did we innovate fast enough?" but rather, "Did we govern wisely enough?"