Rising AI Risks Expose Weaknesses in Existing Governance Models — Omotayo Salako

As artificial intelligence continues to gain ground across financial services, healthcare, public administration and critical infrastructure, regulators and organisations are facing a growing challenge over how to govern systems that evolve faster than the rules designed to oversee them.

While AI promises efficiency, scale and innovation, its rapid adoption has exposed weaknesses in traditional governance and oversight frameworks, many of which were created for static, rule-based technologies rather than adaptive systems that learn and change in real time. Experts say the debate has moved beyond whether AI should be regulated to whether existing governance structures are capable of keeping up with how these technologies actually operate.

According to U.S.-based cybersecurity governance and IT risk specialist Omotayo Fatimat Salako, the core issue lies in the mismatch between AI’s continuous evolution and the periodic controls commonly used to manage risk. “Most governance frameworks were designed for technologies that change slowly and predictably,” she said. “AI systems evolve constantly, which means risk can emerge long before traditional audits or reviews ever take place.”

Conventional assurance models typically rely on documentation, point-in-time testing and retrospective reviews. However, AI systems learn from data as they operate, creating what experts describe as an oversight gap, where accountability struggles to keep pace with innovation. The challenge is further complicated by the opaque nature of many AI models, often referred to as ‘black boxes’, making it difficult for organisations to clearly explain how decisions or outcomes are reached. This lack of transparency clashes with regulatory expectations around traceability and auditability.

Beyond transparency, AI introduces emerging governance risks that are difficult to manage with traditional tools. These include model drift, where systems change over time in ways that weaken original safeguards, and algorithmic bias, which can unintentionally reinforce inequality. There are also concerns around over-automation, particularly in high-impact decisions affecting individuals, markets or public services.
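To make the model-drift concern concrete, the sketch below is purely illustrative and not drawn from Salako’s work or any specific organisation’s tooling. It compares the distribution of a feature a model was validated on with the distribution seen in production, using the population stability index (PSI), a widely used drift measure; the threshold, variable names and synthetic data are assumptions made for the example.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Bucket two samples of one feature on shared bin edges and sum
    (actual% - expected%) * ln(actual% / expected%).
    A common rule of thumb treats PSI > 0.2 as a sign of meaningful drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    expected, _ = np.histogram(reference, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, with a small floor to avoid division by zero.
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.normal(600, 50, 10_000)  # data the model was validated on
    live_scores = rng.normal(630, 60, 10_000)      # data observed in production
    psi = population_stability_index(training_scores, live_scores)
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")
```

A check like this only flags that the input distribution has shifted; deciding whether the shift weakens the original safeguards still requires human review, which is precisely the accountability question governance frameworks are meant to answer.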

“There’s a growing tendency to treat AI outputs as inherently objective,” Salako noted. “That assumption can be dangerous if governance structures don’t require ongoing challenge, validation, and accountability.”

Third-party dependency has also emerged as a key risk, as many organisations rely on external AI vendors while having limited insight into how models are trained, tested or governed. This lack of visibility raises questions about responsibility and control when things go wrong.

In response, governments have begun to take action. Initiatives such as the European Union’s AI Act and evolving guidance from regulators in the United States and other regions signal increasing recognition that AI requires targeted oversight. However, experts warn that regulation alone may not be sufficient, as laws often lag behind technological change and compliance can become a box-ticking exercise.

“Having policies on paper doesn’t necessarily mean risks are being managed in practice,” Salako said. “Without mechanisms to monitor AI behaviour continuously, organisations may meet regulatory requirements while still being exposed to significant unseen risk.”

As a result, some governance professionals are advocating for a shift from periodic assurance to continuous oversight. This model embeds governance controls directly into AI systems, allowing for real-time monitoring, automated testing and accountability across the system’s lifecycle. Although adoption remains uneven, proponents argue that this approach better reflects the realities of AI-driven environments and requires closer collaboration between technology teams, risk professionals, legal advisers and ethics bodies.
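What “embedding governance controls directly into AI systems” can mean in practice is sketched below, again as an illustration under assumptions rather than a reference implementation: a hypothetical wrapper that logs every automated decision and runs a simple automated fairness check on a rolling basis. The GovernedModel class, the approval-rate-gap threshold and the dummy scoring model are all invented for the example.

```python
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")

@dataclass
class GovernedModel:
    """Wraps any scoring model so every prediction leaves an audit trail
    and simple automated checks run continuously, not once a year."""
    model: object                                        # anything exposing .predict(features) -> float
    audit_log: list = field(default_factory=list)
    approval_rates: dict = field(default_factory=dict)   # per-group (approved, total) tallies

    def predict(self, features: dict, group: str) -> bool:
        score = self.model.predict(features)
        decision = score >= 0.5
        # Record an auditable trail for every automated decision.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "group": group, "score": score, "decision": decision,
        })
        approved, total = self.approval_rates.get(group, (0, 0))
        self.approval_rates[group] = (approved + int(decision), total + 1)
        return decision

    def run_automated_checks(self, max_rate_gap: float = 0.2) -> None:
        """Automated test intended to run on a schedule or every N decisions."""
        rates = {g: a / t for g, (a, t) in self.approval_rates.items() if t > 0}
        if rates and max(rates.values()) - min(rates.values()) > max_rate_gap:
            log.warning("Approval-rate gap across groups exceeds %.0f%%: %s",
                        max_rate_gap * 100, rates)

class DummyModel:
    def predict(self, features: dict) -> float:
        return features.get("score", 0.0)

if __name__ == "__main__":
    governed = GovernedModel(DummyModel())
    governed.predict({"score": 0.8}, group="A")
    governed.predict({"score": 0.3}, group="B")
    governed.run_automated_checks()
```

The point of the sketch is the placement of the controls: logging, testing and alerting sit inside the decision path itself, so oversight happens across the system’s lifecycle rather than in a retrospective review.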

The implications of weak AI governance extend beyond corporate risk. Automated systems are increasingly shaping access to credit, employment, healthcare and public services. When oversight fails, the consequences often spill into society, eroding public trust and deepening existing inequalities.

As Salako puts it, “The question society must ask is not just whether AI works, but whether its use is transparent, fair, and accountable to the people it affects.”

With AI forcing institutions to confront the limits of traditional governance, experts say the choices organisations make now may define the long-term legitimacy of these systems. Whether they pursue genuine reform or settle for minimal compliance could determine whether technological progress ultimately strengthens or undermines public trust in the digital age.
