Latest Headlines
AI is Supercharging a Global Cyber Fraud Crisis – But It Could Also Solve It
Jeremy Jurgens
Ask most people to name AI’s defining moment and they will likely point to ChatGPT’s public launch in November 2022. Yet, a different development involving the technology could have a bigger impact on people’s everyday lives – the use of AI to carry out cyberattacks.
AI is now widely seen as the biggest threat to online security in the year ahead. Reports of hackers bypassing guardrails to launch cyberattacks on major companies cannot be ignored, prompting governments to brace for a surge in the scale and severity of AI-enabled cyberattacks.
Cyber-enabled fraud is already widespread: 73% of respondents to the World Economic Forum’s latest Global Cybersecurity Outlook say they or someone in their network was personally affected in 2025. Many CEOs now rank it as the top cybersecurity threat – overtaking ransomware – with 77% of respondents reporting an increase in incidents over the past year.
What is happening is no longer a niche threat. It is a societal crisis.
Increasing Fear of Fraud
Already the world’s most pervasive cyber threat, fraud now puts everyone at daily risk – from politicians to pensioners, employers to employees.
AI models can create synthetic voices that are now used in voice-phishing attacks over the phone. One UK engineering company was scammed out of $25 million when this tactic was deployed on a video call. If the bosses of some of the world’s most sophisticated and resilient enterprises are falling victim to these crimes, how worried should smaller businesses be about issues such as email fraud, fake invoices and identity theft? What about the risks for everyday citizens, particularly vulnerable groups such as the elderly?
Fraud has become the connective tissue of cyber risk, affecting households, corporations, and national economies simultaneously. One scam email can lead to data breaches that cause a breakdown in a company’s operations, setting off a chain reaction that can ripple through supply chains and across borders, denting not just bottom lines, but trust in digital and international systems.
Using AI Against Itself
AI’s potential to automate cybercrime may only be matched by its capacity to prevent it. Machine learning algorithms can already detect fraud in sectors such as banking, but they address the act, not the intent. Meanwhile, AI has lowered the cost of deception while increasing its credibility.
Recent headline-grabbing cases illustrate the stakes. A deepfake video showed an Irish presidential candidate falsely announcing her withdrawal from the election campaign, while Indonesian citizens were scammed by an Instagram video appearing to show the country’s president directing people to a WhatsApp number to receive aid.
Yet fraud does not only strike through high-profile incidents. Smaller-scale scams occur constantly, and their prevalence is rising across all economies. The Forum’s research shows that 82% of people in sub-Saharan Africa and 79% in North America have been affected or know someone who has, illustrating how fraud has become a daily background risk of digital life.
AI is enabling rapid, tailored content creation, allowing criminals to scale and personalize scams with unprecedented efficiency. This should serve as both a warning and a wake-up call, highlighting the need for equally advanced AI-enabled detection, authentication, and monitoring tools.
Societal Norms Under Threat
What happens when people can no longer trust not only the text messages and emails they receive, but also the voices they hear and the faces they see on their screens? As cybercrime scales, this risk becomes real – threatening not only people’s finances, reputations, and businesses, but the trust that underpins the foundations of modern society.
Fraud existed long before AI, but the risks are now intensifying. Economic stability is under threat, with entire sectors being put at risk by financial losses. A string of cyberattacks on UK food retailers in 2025 left one saying its profits had been almost completely eradicated, while a cyberattack on Jaguar Land Rover was the most costly in British history, knocking $2.6 billion off the UK economy.
Democratic processes are vulnerable too. As the Irish deepfake incident shows, if public discourse can be manipulated so easily and at scale, the legitimacy of election campaigns – and of the votes themselves – comes into question. Once trust erodes, public engagement will evaporate.
Systemic Defences for Systemic Risks
Today’s cyber defences are not keeping pace with the accelerating speed and sophistication of cyberattacks, but that does not have to remain the case. The Global Cybersecurity Outlook identifies three main obstacles to better cyber defences: fragmented regulation across borders, insufficient intelligence sharing, and a lack of cybersecurity capacity among small and medium-sized enterprises, with 46% reporting critical skills shortages.
Recognition of the urgency is beginning to take shape. Initiatives such as the UN and INTERPOL’s upcoming Global Fraud Summit in March signal a shift towards more coordinated international action to prevent cybercrime.
Protecting individuals also requires action at the human, infrastructural and technological levels – from digital safety education and stronger identity verification and domain oversight to AI-enabled screening that flags fraud before harm occurs.
Isolated actions will not be enough. As fraud becomes systemic, the response must be systemic too. This will require collaboration on a global scale, bringing governments, industry leaders and civil society together to act across borders rather than just within them. Only in this way can they strengthen their collective capacity to prevent, protect against, and mitigate cyber-enabled fraud across the digital ecosystem.
• Jurgens is Managing Director, World Economic Forum