Akinlolu: The Missing Link in Nigeria’s Immunization Efforts
By Rebecca Ejifoma
For decades, Nigeria’s immunization strategy has been defined by logistics, yet millions of children have remained unreached, with over two million zero-dose children as of last year. At the height of the COVID-19 vaccination rollout, a different question emerged: what if the constraint was not infrastructure, but belief? In this interview, Rebecca Ejifoma speaks with health data strategy expert and health information management consultant, Oluwamisimi Akinlolu, who brings over four years of experience across health technology, public health project management, and data analytics. She reflects on the early deployment of AI-supported social listening tools to track sentiment and inform targeted communication strategies.
Nigeria has invested heavily in vaccine supply chains and cold storage for decades. Why hasn’t that been enough to close the immunization gap?
Because the bottleneck was never really the vaccine itself. As of 2021, Nigeria had approximately 2.1 million zero-dose children, the third-highest number globally, despite having functional cold chains and a long-running national immunization programme.
What those systems struggle to address is the human decision not to vaccinate. Whether that decision is shaped by rumours circulating on WhatsApp, distrust of government health workers, or the influence of community leaders, supply alone cannot resolve it.
What became increasingly clear during this period was that behavioural intelligence was the missing variable: a more immediate understanding of why communities hesitate, and how those reasons differ across contexts.
What specific drivers of hesitancy were most visible at the time?
They varied widely, and that variation was part of the difficulty. In parts of northwest Nigeria, studies around 2020 suggested that a significant share of caregivers cited safety rumours, often spread through informal networks like WhatsApp and local word-of-mouth. In the northeast, the challenge appeared more structural, with lower levels of trust in government health recommendations shaped by historical experiences.
Historical precedents also remained relevant. The 2003 Kano fatwa against the oral polio vaccine continued to serve as a reference point for how quickly community sentiment could influence uptake at scale.
At the same time, emerging evidence suggested that even isolated reports of adverse events could have outsized effects on community confidence. These dynamics were not uniform, and they were not easily captured through routine reporting systems, which were often aggregated and delayed.
What did AI-driven social listening actually look like in 2021?
At that point, social listening was not entirely new, but the application of machine learning to public health communication in Nigeria was still developing.
The systems we worked on used natural language processing to analyze sentiment and detect recurring themes across large volumes of data. A key part of the work involved adapting these models to Nigerian contexts, including Pidgin, Hausa, and Yoruba, because many existing tools were trained primarily on English-language datasets.
Topic modelling techniques helped surface recurring concerns as they appeared, while basic network mapping provided some visibility into how narratives were spreading. In some cases, geospatial tagging allowed insights to be linked to specific states or local government areas. It wasn’t a perfect system, but it offered a more immediate and layered view of public sentiment than traditional tools.
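As a purely hypothetical illustration of the kind of pipeline described above (not the actual deployed system, whose models and lexicons are not public), tagging each incoming message with a concern theme and a crude sentiment score might look like the following sketch. Every keyword here, including the Pidgin terms, is an invented example.

```python
from collections import Counter

# Illustrative sketch only: small hand-built lexicons standing in for the
# trained multilingual models the interview describes. All terms invented.
THEMES = {
    "safety": {"clot", "side effect", "wahala", "reaction"},
    "trust": {"government", "gomment", "lie", "agenda"},
    "access": {"queue", "far", "closed", "no vaccine"},
}
NEGATIVE = {"fear", "never", "wahala", "lie"}


def tag_message(text):
    """Return (themes, sentiment) for one message.

    Themes are matched by simple keyword lookup; sentiment is a crude
    count of negative cue words (more negative = more concern).
    """
    low = text.lower()
    themes = [name for name, words in THEMES.items()
              if any(w in low for w in words)]
    sentiment = -sum(low.count(w) for w in NEGATIVE)
    return themes, sentiment


messages = [
    "People dey fear say the vaccine get wahala",
    "The centre is too far and the queue is long",
]
# Aggregate themes across messages, mimicking the "recurring concerns" view
theme_counts = Counter(t for m in messages for t in tag_message(m)[0])
print(theme_counts.most_common())
```

In a real system the keyword lookups would be replaced by trained classifiers, but the aggregation step, counting which concerns recur and where, is the part that feeds targeted communication.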
How did you deal with the fact that many hesitant populations were not highly active online?
That was one of the key limitations we had to work around. Social media alone could not provide a complete picture, particularly in rural or lower-connectivity areas.
To address this, we began incorporating additional data sources where possible, including radio transcripts, community health worker reports, and call center logs. Radio, in particular, remained a major source of information in many communities. Even with these additions, coverage was not comprehensive, but combining multiple sources helped reduce some of the bias that would have come from relying on digital platforms alone.
How were these insights used in practice during the COVID-19 rollout?
The goal was to move from broad, generalized messaging to more context-specific communication. This work was embedded directly within the Federal Ministry of Health and the National Primary Health Care Development Agency — the NPHCDA. I was not operating at arm’s length; I was inside the system, working alongside senior NPHCDA leadership and FMoH technical teams, providing the analytical foundation for communication and vaccination strategy as it was being designed and adjusted in real time.
As data was aggregated and analyzed, it became clearer that concerns differed significantly across regions. In Lagos, for example, there was noticeable anxiety linked to international reporting on certain COVID-19 vaccines and blood clots. In some northern states, concerns were more closely tied to institutional trust and religious framing.
These distinctions informed how communication teams approached messaging, but what made the approach effective at scale was the cross-functional coordination. I worked across the Advocacy, Communication, and Social Mobilization — or ACSM — groups spanning all 36 states plus the Federal Capital Territory. Each state has its own ACSM structure with communication officers and social mobilizers.
I developed a national communications toolkit grounded in real-time behavioural data and led training for ACSM officers across the country, translating machine learning outputs into state-specific communication briefs that non-technical officers could act on directly. Rather than treating hesitancy as a single issue, there was now a systematic, data-driven effort to tailor responses by state, by concern type, and by channel — coordinated through a unified framework that connected 37 state-level ACSM teams to a single behavioural intelligence pipeline.
You mentioned being embedded within the NPHCDA. Can you describe the specific institutional structures you helped build during that period?
Two structures were particularly important. The first was the country’s Evidence Generation Task Team, which served as the NPHCDA’s mechanism for ensuring that operational decisions during the COVID-19 response were grounded in data rather than assumptions. I supported this task team by providing the analytical outputs from the social listening system and the broader data infrastructure, ensuring that the evidence base the task team relied on was current, triangulated across multiple sources, and behaviourally informed rather than limited to epidemiological reporting alone.
The second was the COVID-19 Rapid Response Immunization and Communication Centre — the CRICC — which I helped establish within the NPHCDA. The CRICC served as the nerve center where social listening data, ACSM field reports, and vaccination operational data converged into coordinated, rapid-cycle communication decisions.
When the system detected a surge in a particular misinformation narrative in a specific set of states, the CRICC was the mechanism that activated the targeted response, deploying differentiated messaging through the right channels in the right local government areas within days, not weeks. Setting up the CRICC was critical because it gave the social listening infrastructure an institutional home inside the NPHCDA, which meant the approach could persist beyond any single campaign or funding cycle.
Did the system help with early detection of misinformation?
To an extent, yes. Compared to traditional reporting cycles, which could take weeks, these tools allowed for earlier signals, sometimes within days, when particular narratives began to gain traction.
In pilot contexts, this created a small but meaningful window for response, allowing communication teams to address concerns before they became widespread. Through the CRICC, we were able to formalize this into a rapid-response cycle that connected real-time behavioural signals to state-level ACSM teams and the Evidence Generation Task Team simultaneously. It wasn’t always consistent, but it demonstrated what might be possible with more mature systems.
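The surge detection described here, flagging when a narrative's daily mentions jump well above their recent baseline, can be sketched with a simple rolling z-score. This is an illustrative stand-in, not the deployed method; the window size, threshold, and counts below are all invented.

```python
from statistics import mean, stdev


def detect_surge(daily_counts, window=7, z_threshold=2.0):
    """Flag days where mentions of one narrative exceed the rolling baseline.

    daily_counts: chronological list of daily mention counts.
    Returns the indices of flagged days. Window and threshold are
    illustrative choices, not the values any real system used.
    """
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # flat baseline: avoid division by zero
        if (daily_counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged


# A quiet week of chatter followed by a sudden spike in mentions
counts = [3, 4, 2, 5, 3, 4, 3, 20]
print(detect_surge(counts))  # → [7]: the spike on the last day is flagged
```

The value of even a crude detector like this is the response window it creates: a flag raised within a day of a spike, rather than weeks later through aggregated reports, is what let communication teams act before a narrative became widespread.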
You have spoken about different types of hesitant caregivers. How did that shape thinking at the time?
It reinforced the idea that hesitancy is not a single category. Some caregivers were cautious but open to reassurance, others were more firmly resistant, and many faced practical barriers to access. Recognizing these differences helped shift conversations away from one-size-fits-all messaging, even if fully segmented interventions were still a work in progress.
What kind of outcomes were visible during this period?
Some early indicators suggested improvements in how communication strategies aligned with public concerns, particularly in areas where data was actively being used to inform messaging through the ACSM structures and the CRICC.
There were also periods, such as early 2022, when vaccination rates increased significantly. Across the period when the social listening system, the CRICC, the ACSM coordination, and the mass vaccination campaigns were all operating together, vaccine uptake ramped up by more than 55%. This reflected the integrated approach: behavioural intelligence informed communication strategy, which in turn informed campaign targeting, which in turn drove the logistics of where and when to deploy vaccination teams across all 36 states and the FCT.
It would be an overstatement to attribute these changes to any single intervention, but the role of more responsive, data-informed communication, coordinated through the institutional structures we built at the NPHCDA, became increasingly evident. The mass vaccination campaigns were running simultaneously with the analytics work, and the two reinforced each other: the campaigns generated field data that fed the listening system, and the listening system generated insights that reshaped how the campaigns were targeted.
What did this work suggest for the future of health systems, as of 2021–2022?
It pointed toward a few emerging shifts. First, the need for more timely data: systems that could capture changes in sentiment as they happened, rather than weeks later. Second, the value of more targeted communication, even if the tools for doing this were still developing. And third, the importance of grounding interventions in community feedback, rather than relying solely on top-down assumptions. At the time, these were still evolving ideas, but they were beginning to shape how programmes thought about engagement.
What were the main constraints to scaling this approach then?
Several. Data privacy considerations were important and still being worked through. Infrastructure gaps limited coverage in some areas. There was also limited data science capacity within many health systems, which affected how easily insights could be translated into action.
And finally, integration remained a challenge. Linking these insights with existing systems like DHIS2 was not always straightforward. These constraints meant that while the approach showed promise, scaling it required both technical and institutional adjustments.







