It could have happened to anyone. As luck would have it, it happened to an employee at Luno, the SA crypto exchange. In mid-2024, according to news site WeeTracker, the employee received a WhatsApp voice note from a senior executive requesting a money transfer. The message was fake – a deepfake, created by fraudsters using AI tools to mimic the executive’s voice. Luno was lucky: the employee flagged the threat and the fraud was averted.
Arup wasn’t so lucky. Two months earlier, the British multinational design and engineering company lost $25 million when one of its Hong Kong employees fell for a similar deepfake scam.
Stories such as Luno’s and Arup’s are becoming increasingly common, as fraudsters – both from outside the organisation and from within – use AI tools to enhance their cybercriminal activities. SA remains a global cybercrime hot spot, ranking sixth globally for victims per million internet users in 2023.
‘South Africa faces a perfect storm of fraud challenges that impact the entire breadth of the economy,’ says Sameer Kumandan, MD of data aggregation platform SearchWorks. ‘From digital identity scams that affect consumers to complex procurement fraud that places organisations at risk, businesses need to adopt proactive strategies to mitigate financial and reputational damage as a matter of urgency.’
Kumandan points to payment and procurement fraud as emerging threats, alongside identity theft and social engineering scams. ‘These are often driven by a combination of cunning tactics by fraudsters and existing weaknesses within organisations,’ he says. ‘Legacy systems within organisations, human error, as well as a lack of tools or processes to identify fraud quickly create a cascade of challenges. Added to this are the advanced tools cyberattackers use, including AI-generated deepfakes to create convincing fake identities or bypass security measures.’
The financial losses are just the start of the repercussions companies face. That second paragraph may have been the first time you heard of Arup. Its portfolio includes designing iconic sports venues such as the City of Manchester Stadium, Allianz Arena, Beijing National Stadium and Donbas Arena. But all you’ll remember now is that it was fleeced out of $25 million.
‘The consequences of inaction against fraud have an impact on the immediate victims as well as the broader economy. Overall trust in organisations and public institutions is a critical ingredient in the functioning of a healthy society. The stakes are simply too high to ignore,’ says Kumandan.
And while AI (including generative AI, or GenAI) is a potent weapon of attack, it’s also proving to be a useful defence against the dark arts. The US Treasury acknowledges using the technology to detect fraud – and in the 2023 fiscal year alone, its AI-powered fraud detection recovered $375 million.
Treasury’s records show that cheque fraud has increased by 385% since the pandemic – a significant problem for an organisation that disbursed 1.4 billion payments totalling $6.9 trillion in 2023 alone. ‘We are using the latest technological advances to enhance our fraud-detection process, and AI has allowed us to expedite the detection of fraud and recovery of tax dollars,’ according to deputy Treasury secretary Wally Adeyemo.
Georgina Caldow, head of marketing, communications and business development at law firm CMS, agrees. She says that while financial fraud remains a widespread issue in SA, AI is especially useful in identifying suspicious spending patterns and irregular payments that can indicate fraud.
‘A major advantage of AI is its ability to scale effortlessly,’ she says. ‘For South African financial institutions and businesses, the manual methods of detecting fraud struggle to keep up with the increasing volumes of transactions. AI systems, on the other hand, can scale up seamlessly, handling large numbers of transactions without needing additional human resources.’
This is particularly useful in SA’s increasingly digitised banking sector, says Caldow.
‘AI also significantly reduces false positives in instances where legitimate transactions are incorrectly flagged as fraudulent,’ she says. ‘By improving the accuracy of fraud detection, AI helps financial institutions avoid unnecessary customer frustration and enhances trust in their services.’
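The trade-off Caldow describes – catching irregular payments while keeping false positives down – can be illustrated with a deliberately simple sketch. This is not how any bank’s system actually works, and the customer history and threshold below are hypothetical; production systems use trained models rather than a single statistical rule. The sketch flags a transaction when its amount deviates from a customer’s historical spending by more than a tunable number of standard deviations; raising the threshold reduces false positives at the risk of missing real fraud.

```python
from statistics import mean, stdev

def flag_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates from the customer's
    past spending by more than `threshold` standard deviations.
    A higher threshold means fewer false positives, but more missed fraud."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Hypothetical customer history of transaction amounts
history = [120.0, 95.0, 110.0, 130.0, 105.0]
print(flag_suspicious(history, 115.0))   # typical spend → False
print(flag_suspicious(history, 5000.0))  # large outlier → True
```

In a real deployment, the ‘history’ would be a rich feature set (merchant, location, device, time of day) and the threshold would be tuned against labelled fraud data – which is exactly where machine learning earns its keep.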
Human fraud detectors don’t stand a chance. AI is simply quicker at processing large volumes of data, learning from it, and using it to solve problems and predict future actions. Marc Evans, founder of consultancy firm Fraud Hero, emphasised the importance of this in a recent interview with Thomson Reuters. ‘You can’t arrest your way out of fraud,’ he said. ‘You have to investigate your way out of it. You need pre-emptive data analytics so you are prepared to see where the money is going and who the bad actors are.’
It’s no surprise, then, that 83% of the anti-fraud professionals surveyed in the Association of Certified Fraud Examiners (ACFE) and SAS’ latest 2024 Anti-Fraud Technology Benchmarking Report said they anticipated adding AI to their anti-fraud arsenal within the next two years.
‘Fraud and financial crime activity is rising, as is its impact on local and global markets, due to an evolving economic environment and expanding technological capabilities that create new opportunities for fraudsters,’ says Itumeleng Nomlomo, senior business manager at SAS in SA.
‘Organisations will need to be hyper-vigilant and increase their fraud-fighting efforts by deepening their investment in talent, as well as advanced analytics and AI-enabled solutions that offer innovative strengths in effectively detecting, preventing and mitigating fraud. This approach ensures that they can maintain resiliency, protect consumers and gain a competitive advantage,’ she says.
In the same study, nearly one-third of surveyed organisations that were not currently using AI and machine learning said they expected to add it to their anti-fraud efforts shortly. As the report notes, by 2026 half of all organisations expect to be using AI and machine learning as part of their fraud analytics initiatives. ‘Furthermore, the expected adoption rate of AI and machine learning has increased since our prior study, which shows a growing momentum around these tools; in 2022, 26% of organisations expected to adopt this technology over the next two years, while 32% of organisations in our current study are planning to implement AI and machine learning in the near future. The use of predictive analytics and modelling is also expected to rise notably, with 22% of organisations planning to adopt this technology over the next two years.’
But as one digs deeper into the report’s findings, a picture emerges of an anti-fraud industry that is both excited about the possibilities offered by generative AI and wary of the technology’s limitations.
‘Data is only as good as what is put in,’ said one anonymous survey respondent. ‘Many times, fraud is related to what is not put into the system.’
Another respondent expressed interest in generative AI’s ability to identify anomalies, trends and indications in larger volumes of data with minimal resource concerns. ‘However, the organisation will need to ensure that proper guidelines are in place to minimise errors and bias,’ they said.
A third respondent summed up the general feeling. ‘Accuracy, in my opinion, is the biggest challenge for generative AI, as investigators will find it difficult to trust or deploy an inaccurate technology. This is because investigation should be an exact science.’
Rufaida Hamilton, Standard Bank’s head of payments in SA, shares those concerns. ‘The sheer pace at which AI-powered tools can consume and interrogate data sets, analyse patterns and yield outputs enables tasks that previously required significant human effort to be completed with minimal resources,’ she says.
‘This allows teams to focus on complex reasoning assessments that traditional AI cannot perform, as well as on client interactions, enhancing the overall risk-management process.
‘However, AI is not a silver bullet,’ Hamilton warns. ‘Effective fraud prevention requires a multi-pronged strategy, combining AI with customer education, strong compliance frameworks and collaboration with regulators. One of the key roles AI can play in this multi-layered approach is in enhancing compliance and reporting. By automating transaction monitoring, AI can ensure faster and more accurate reporting to regulatory bodies.’
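The automated reporting Hamilton mentions is, at its plumbing level, a rote filtering-and-formatting job – the kind of step AI-assisted pipelines take off human hands. A minimal sketch, with an entirely hypothetical reporting threshold (real limits are set by the regulator) and made-up account data, might look like this:

```python
import csv
import io
from datetime import date

# Hypothetical reporting threshold for illustration only;
# actual thresholds are defined by the relevant regulator.
REPORT_THRESHOLD = 50_000.00

def build_regulatory_report(transactions):
    """Filter transactions at or above the threshold and render
    them as CSV rows ready for submission to a regulatory body."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["date", "account", "amount"])
    for txn in transactions:
        if txn["amount"] >= REPORT_THRESHOLD:
            writer.writerow([txn["date"], txn["account"], f"{txn['amount']:.2f}"])
    return out.getvalue()

sample = [
    {"date": date(2024, 7, 1), "account": "ACC-001", "amount": 12_500.00},
    {"date": date(2024, 7, 2), "account": "ACC-002", "amount": 75_000.00},
]
print(build_regulatory_report(sample))  # only ACC-002 is reported
```

The AI layer sits on top of plumbing like this, deciding which transactions merit a report in the first place; the automation simply guarantees that, once flagged, nothing is filed late or mistyped.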
The good news for organisations is that GenAI is readily accessible and – despite a few early-adoption teething issues – deeply effective. However, that’s also the bad news. Fraudsters and cybercriminals have access to the same tools.
And as ACFE president John Gill concludes in the Anti-Fraud Technology Benchmarking report, ‘it’s an uphill battle when you consider that, unlike the fraudsters, organisations face the added challenge of having to use these technologies ethically’.