An AI Model Trained On Distress Cues Now Waits For Distress In Customer Support
Fadekemi Ajakaiye
An unlikely message came in at 2:47 a.m.: “Hey, sorry to have to bother…” Ninety seconds passed, then: “I just need to talk to someone.”
On the other end, a counsellor at the helpline kept their eyes fixed on the screen, careful not to miss a second.
Not long after, an artificial intelligence model was watching too, from a front-row seat, looking for the gaps, the tone, the changes in pace, and the sudden shifts in mood that language models can now detect days before human experts would notice.
As programmed, the system flags such an exchange, and the counsellor handling the conversation uses the details it surfaces to get to the root of the outreach.
That AI model, tailored to detect distress in people who rarely state it directly, is now listening to service complaints in customer support, in hopes of turning around what has long been an underserved landscape. The crux of the matter lies in the patterns between the words.
Moore Dagogo-Hart, a developer at heart, didn’t initially set out to build customer service software. The system, which eventually became Martha AI, began as a crisis intervention system for Black Pride Canada, a platform supporting Black 2SLGBTQ+ communities, and was trained on emotionally charged conversations, where what people don’t say matters as much as what they do.
They may apologise unnecessarily, buffer urgent requests with a calculated calm, or switch codes mid-conversation when frustration builds beyond what one dialect can convey.
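What reading those signals might look like in code is easier to picture with a minimal sketch. The one below assumes a simple keyword-and-timing approach; the cue lists, feature names, and function are illustrative inventions, not Martha AI’s actual signals or training features.

```python
# A minimal, illustrative sketch: the cue lists and feature names are
# hypothetical, not Martha AI's actual signals or training features.
APOLOGY_CUES = ("sorry to bother", "sorry to have to", "apologies for disturbing")
SOFTENERS = ("just", "no rush", "whenever you can", "if possible")
PIDGIN_MARKERS = {"abeg", "wetin", "dey", "sef"}

def distress_features(messages):
    """messages: list of (seconds_since_first_message, text) tuples.
    Returns coarse behavioural signals, not anything about the content itself."""
    texts = [text.lower() for _, text in messages]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(messages, messages[1:])]
    return {
        # Unnecessary apologies ("sorry to have to bother...")
        "apology_rate": sum(any(cue in t for cue in APOLOGY_CUES) for t in texts) / len(texts),
        # Urgent requests buffered with a calculated calm
        "softener_rate": sum(any(s in t for s in SOFTENERS) for t in texts) / len(texts),
        # Code-switching once frustration outgrows one register
        "code_switched": any(PIDGIN_MARKERS & set(t.split()) for t in texts),
        # Lengthening silences between messages
        "max_gap_seconds": max(gaps, default=0),
    }
```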
At Zap Africa, a non-custodial crypto exchange he co-founded in 2022, customer messages had similar, if not more complex, buildups. One chat would open with a polite “Good morning, please I sent 250 USDT, but it’s still pending.” Time passes. The customer refreshes the app, checks their wallet, and watches the rate move.
Into that suspense comes, “Hi, please can you check? I’ve sent proof.” Hours later, it climaxes into: “Abeg this thing sef wetin dey happen? I no wan hear story o.” To a chatbot, it looks like a mere change of language. To anyone who understands Nigerian customers, it’s unmistakable: patience is gone, trust is slipping, and the user no longer feels heard.
The model was tested on hundreds of thousands of Zap Africa customer conversations. It reportedly reduced response times from hours to just under a minute, but more revealing was the system’s eventual ability to learn when not to respond at all: recognising when automation would only make things worse and handing the reins to a human expert.
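Under the same caveats, that hand-off decision can be sketched as a weighted escalation score, with invented weights and an invented threshold, that suppresses the automated reply once the conversation looks too fraught for a bot.

```python
# Equally hypothetical: made-up weights and threshold for the moment the
# system decides automation would only make things worse.
ESCALATION_WEIGHTS = {
    "apology_rate": 0.2,
    "softener_rate": 0.1,
    "code_switched": 0.4,
    "max_gap_seconds": 0.3,
}

def route(features, handoff_threshold=0.6):
    """Score the conversation and decide whether a bot should answer at all."""
    score = (
        ESCALATION_WEIGHTS["apology_rate"] * features["apology_rate"]
        + ESCALATION_WEIGHTS["softener_rate"] * features["softener_rate"]
        + ESCALATION_WEIGHTS["code_switched"] * float(features["code_switched"])
        # Normalise the longest silence against an hour so it stays in [0, 1]
        + ESCALATION_WEIGHTS["max_gap_seconds"] * min(features["max_gap_seconds"] / 3600, 1.0)
    )
    # "Learning when not to respond": past the threshold, the auto-reply
    # is suppressed and the conversation goes straight to a human agent.
    return "human_agent" if score >= handoff_threshold else "auto_reply"
```

In this toy setup, route(distress_features(chat)) either lets the bot reply or escalates straight to a person.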
Big data’s big deal
The crisis-intervention data used to train the model were anonymised and drawn from aggregated, publicly cleared datasets. Dagogo-Hart stresses that no individual conversations or identities were accessed.
“We train on abstract behavioural signals, not personal stories,” he says. Still, consent remains a live issue. While the data met current privacy standards, he acknowledges the industry must be more transparent about how human distress is repurposed for AI training.
A majority of respondents to a 2023 Pew Research survey expressed concerns about AI systems trained on sensitive or emotional conversations. What counts as “personal data” when the pattern itself becomes the product?
When U.S.-based Securus Technologies built AI models on several years of prison phone calls, inmates were notified that conversations were recorded. Data gathered from those recordings now helps an AI model flag “contemplated” crimes. Crisis lines constitute a distinct segment, yet the mechanism is similar: emotion-laden data from vulnerable contexts are repurposed to detect similar patterns.
So far, only European regulators have started grappling with the questions. Under the European Union’s AI Act, systems inferring emotional states are treated with heightened scrutiny in areas such as employment and education. Customer service, however, sits in a grey zone worldwide since deployment outpaces regulation.
That doesn’t take away from the curious case of AI agents sitting at the centre of the highs and lows of customer conversations. If they learn that certain communication patterns predict customer churn or support costs, does that knowledge remain neutral?
The economics of understanding
Nigerian fintechs consistently report that failed and delayed transactions account for a hefty portion of customer support tickets. Users abandon apps after repeated unresolved interactions, contributing to high dormancy rates — three out of five mobile money accounts in Nigeria are inactive.
Zap Africa has processed over $17 million in transactions, serving users who can’t afford transaction failures or extended wait times. For them, a system that interprets their mannerisms as signals to prioritise can make the difference between using the service and throwing the baby out with the bathwater.
That efficiency comes from training data drawn from some of the most unlikely conversations. As similar systems expand into other sectors, the question of who benefits from this understanding remains unanswered.
The FCC’s October 2024 reversal on prison telecom fees shows how this pans out when regulators catch up. The commission permitted the passing of AI development costs to inmates, explicitly citing “advanced AI and machine learning” as justification. The decision drew dissent that “law enforcement should foot the bill, not the families of the people serving time.”
It’s unclear if there is a similar framework for customer service AI. The system doesn’t monitor people against their will or predict criminal intent, but it does analyse emotional states using training data from people who didn’t imagine their conversations would teach a machine to read frustration in wider commerce.
Other companies are developing “small AI” for applications such as health triage and banking alerts. Early results suggest that users prefer interactions that align with their emotional rhythm. The technology works; only the ethical issues remain unresolved.
As emotional AI moves into pole position, the tension is no longer whether it can understand what people don’t say outright. The question is whether the people whose most vulnerable moments trained those systems will have any say in how they are used, and whether the rest of us will know when we’re talking to a machine that learned empathy from someone else’s distress.