How AI increases your organisation’s fraud risk

18 June 2024 - Tim O'Connor

The benefits and risks of the increasing use of Artificial Intelligence in all parts of life are becoming the source of lively and varied debate. Tim O’Connor, Audit Partner, shares his top tips on how business owners can keep their teams alert, aware and protected.

In areas such as education there is now an acceptance that AI exists as a tool, and that ways must be found both to police it and to embrace it. As ever, our entrepreneurial Higher Education sector will lead from the front, both in developing AI and in assimilating it into our normal lives. With Apple now adopting an interface with ChatGPT to enhance its Siri offering, it is going to be impossible to avoid the growth and development of AI.

However, alongside the benefits of such tools there will always be people looking to utilise them for more nefarious purposes. The rise of cybercrime and its global nature is rightly a concern for all businesses, charities and education providers. There is a genuine (and correct) concern that legitimate business is always one step behind the fraudsters, with patches and fixes for exploits only put in place once the danger is uncovered.

It should be remembered that some cybercrime is simply a way of carrying out good, old-fashioned frauds, such as payment diversion and procurement fraud, by electronic means. As reliance on physical post reduces, fraudsters must make use of the electronic equivalents. The enhanced risk lies in the relative lack of security of the electronic over the physical, or at least in the ease of targeting and the large store of information held within a mailbox.

We have all become used to phishing e-mails, and to the shame and vilification of clicking on a link we shouldn’t have. The online training course acts as the modern-day equivalent of being put in the stocks and having rotten vegetables hurled at you. But there is a reason that organisations take the training and reminders so seriously. Endpoint user attacks are the key tool for testing system weaknesses before moving on to more targeted attacks, such as data theft (the recent issue for Ticketmaster) or ransomware and denial-of-service attacks (such as those suffered by a supplier to the NHS).

These endpoint user attacks are being attempted all the time, with phishing e-mails the most common of them. In the UK government’s Cyber Security Breaches Survey 2024, 84% of the businesses surveyed reported being the subject of phishing attacks. Perhaps more worryingly, the other 16% just hadn’t noticed. Of the 84% that had suffered phishing attacks, 22% reported that these ended with them becoming victims of cybercrime. So the risk for all organisations continues to be ‘who is the weakest link’, hence the dummy e-mails, the training and the constant reminders to be alert.

Another risk around e-mail use is the cloning (or spoofing) of e-mails from ‘other’ team members requesting money urgently. These have become increasingly common, and I regularly receive requests for emergency funds purporting to come from both colleagues and clients. Whilst my generosity knows few limits, I am yet to succumb to such a request.

I have run training courses for finance teams on fraud and cyber fraud, and one of the key reminders around cloned e-mails and phishing attacks generally is that they are fairly blunt tools. The source e-mail address is not the one displayed (hover the cursor over the address to see where it has really come from), and the language used is inconsistent with the individual or contains basic mistakes. These flaws make it relatively easy to debunk such e-mails, as long as you think sceptically about them in the first place.
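For finance teams who want to automate that first check, the idea of comparing the displayed sender with the address a reply would actually go to can be sketched in a few lines. This is a minimal illustration using only Python’s standard library; the message, names and addresses are invented for the example, and a real mail gateway would check far more (DKIM, SPF, DMARC results and so on).

```python
# Minimal sketch: flag e-mails whose Reply-To address does not match
# the displayed From address - a common sign of a spoofed sender.
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

def check_sender(raw_bytes: bytes) -> str:
    """Compare the displayed From address with the Reply-To address."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    if reply_to and reply_to.lower() != from_addr.lower():
        return f"SUSPICIOUS: replies go to {reply_to}, not {from_addr}"
    return f"OK: {from_addr}"

# Invented example message with a mismatched Reply-To header.
sample = (b"From: Finance Director <fd@example.org>\r\n"
          b"Reply-To: attacker@evil.example\r\n"
          b"Subject: Urgent payment request\r\n"
          b"\r\n"
          b"Please transfer the funds today.")
print(check_sender(sample))  # flags the mismatch as SUSPICIOUS
```

A mismatch here is not proof of fraud (newsletters and ticketing systems legitimately use differing Reply-To addresses), but it is exactly the kind of cheap, automatic sceptic that catches the blunter attacks.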

This is where AI and deepfake technology are starting to make these types of attack more dangerous. AI is now being used to learn writing styles, making requests appear to be in the correct ‘voice’ and therefore more believable. It may also make finding the source address harder, taking away one of the tools for discovery. Where phishing attacks have been successful, machine learning means lessons will be learnt, and the more successful techniques rolled out more widely and more quickly. We will be increasingly reactive to these challenges, with risks growing and mitigations becoming harder to find. More attacks will succeed before the fixes and patches can be applied.

So, as owners and managers of organisations we have to keep investing in our cyber security, keep training and reminding our people, and ensure that the basics are never forgotten. It is important to remain sceptical of electronic communication: always check requests outside of the e-mail communication flow (for example, ring from a number not included in the mail footer), keep up to date and share knowledge, always be suspicious of attachments and, whatever you do, don’t click on the links!

As part of their annual audit process, your external auditor will review and assess the design and implementation of identified controls in your business. This will include consideration of areas most susceptible to fraud, reporting back to you on their findings.  If you are interested in discussing this further, please email hello@scruttonbland.co.uk or call 0330 058 6559.
