Sponsored Content

ARTIFICIAL intelligence is revolutionising the world as we know it. AI has silently been underpinning most of our technology since the early 2000s, so we’ve been enjoying the benefits of it for a while now. But, as AI evolves, it is inevitable that cyber criminals will leverage this sophisticated technology to their advantage.

Here, Sure Business professional services consultants Grant Mossman and Malcolm Mason give their opinion on the rapid evolution of AI and whether it poses a threat to cybersecurity.

SHOULD BUSINESSES BE WORRIED ABOUT IMPLEMENTING AI?

Malcolm: They should be worried about not implementing an AI strategy. AI is a natural evolutionary step, and we've seen that businesses which developed a data strategy have thrived while others haven't. For around 90% of businesses, we would say a data strategy is vital.

Grant: It depends on whether the AI in question is publicly available or an in-house, segregated solution. It matters where it originates from, as this is where your data will be held. For example, ChatGPT is a public service, so any data you input should be treated as if it were in the public domain.

WHAT ARE THE KEY CONCERNS AROUND AI AND CYBERSECURITY?

Grant: That people don't fully understand where their data is going. People are keen to use this technology without thinking about the implications of putting sensitive data in the wrong place. Large corporations such as Apple and Samsung have even restricted the use of ChatGPT so as not to risk jeopardising sensitive company information. This isn't to say that staff are uneducated, but there is still naivety.

Malcolm: Another big concern is the grey area around PII (personally identifiable information). Your email address alone counts as PII. Think about how often we use our email address to create accounts and identify ourselves, such as with online banking. The rise of AI technology means that the lines between what we should and shouldn't do are becoming blurred.

CAN AI SOFTWARE BE TAKEN ADVANTAGE OF AND, IF SO, WHAT ARE THE RISKS?

Grant: Yes, it can. Not long after ChatGPT, FraudGPT emerged. Cyber criminals are taking the same type of model and training it on criminal data to create phishing emails and malicious applications.

Malcolm: People have quickly learnt how to leverage these programs and fine-tune them to suit their own deviant needs. When people feed in bad data, they get bad outcomes, which in turn corrupts other outputs. It is becoming very easy to train a model to support fraud.

IN YOUR OPINION, DOES AI SOFTWARE POSE A DATA PROTECTION THREAT AND WHY?

Grant: Governing bodies have said it shouldn't affect data protection laws, as GDPR still applies. But the real threat comes from not knowing who has uploaded the data. And does the end user realise that this data is being recorded?

Malcolm: Most of the free office tools we use now have AI built in to create content, and that AI is learning from something: it's learning from your data. We would advise caution beyond what the regulations say, and always keep in mind whether your use of AI aligns with the promises you have made to your customers. It's a real grey area, as there's not much software without an AI function now.

WHAT ADVICE WOULD YOU GIVE ORGANISATIONS TO STAY SECURE?

Grant: We can get you Cyber-Essentials-certified, provide endpoint protection systems and help to prevent breaches before they happen. We also offer regular control testing of your environment, because without this, you don't know where your weaknesses lie.

Malcolm: Our main priority is giving our customers control of their data, so they know exactly where it is and that it is in safe hands. If you’d like to find out more about how Sure Business can help your organisation to stay cyber safe, get in touch by emailing contactus@sure.com.