Does AI threaten our digital security? Interview with cybersecurity expert Eddy Willems (G DATA)


Eddy Willems is a cybersecurity expert and security evangelist at G DATA. We spoke to him about our digital security and AI.

Some time ago we were talking about the dangers and possibilities of artificial intelligence (AI). You then mentioned that systems like ChatGPT can very easily create texts or even websites to use in phishing campaigns. What about six months later? Is that risk still there, or is everything not so bad now?

No, not. I actually think the problem is still there. You shouldn’t ask “write me a phishing email” because of course he won’t do that, but you can certainly ask for such a text to be corrected. And if he does indeed improve the text, then you will see that the system is delivering something. You can also request translations from ChatGPT or Bard or whatever. Then of course we’re over LLM’s, large language models busy. You see that they are used more and more. The threat has therefore not gone away and is still real. We also see improvements in a lot of things. We don’t have exact figures, but you see more and more generative AI appearing in phishing emails and spam emails.

Now, there are a lot of other threats from AI. For example, we have prompt injectionattacks seen. That’s very similar sequel injection: you give incorrect data to a database on the internet in order to get different data from it. Prompt injection is more or less the same. Imagine: on the one hand you have a website that you surf to, with a lot of data on it. On the other hand, you have Bing Chat, which communicates with the website in your browser. By properly interrogating Bing Chat, you can use prompt injection to transform the AI ​​system into a social engineer, who can at some point request your credit card details and the like. Meanwhile, a barrier was built into Bing Chat to prevent such things, but of course it happened.

So just as users in the early days could ‘jailbreak’ ChatGPT and Bing by providing the right input, could hackers abuse the system?

Yes, that’s basically what it comes down to. I think it is an important development: if you see that this was possible at a certain time, then it means that this will be possible again someday. You can then say that AI is already special at the moment clever is, and you can completely stop that with a number of filters. I have my doubts about that. It’s about the same as programming things: you see that updates are still coming and changes are still being implemented. You cannot completely shield that. I honestly think that if it’s in there now, it will come out again later.

digital security AI
Eddy Willems. © G DATA

Another element is data leakage. There have already been people who have seen data that they had written down in a book being used by Bing Chat or ChatGPT. Why does that happen? Because they are fed with a lot of things. The problem, of course, is that there is a lot of data that these systems are not allowed to access.

It is very difficult to check that and estimate it correctly. A lot of people who try the chat systems also throw in a lot of information from their company. Theoretically, this is all well maintained and it is said that it is handled with care, but is that a guarantee? I do not think so. I fear that the more we give it, the more the systems may end up with data that was not simply intended to be released. Of course, there is always such a fear with something new like ChatGPT.

Generative AI of course goes beyond chat systems like ChatGPT in Bing.

That’s right! Look at deepfakes, for example. This mainly concerns voices, because your voice is quite easy to clone. I had it done to myself: you speak something into something for a minute and your voice is actually copied beautifully. Besides the fact that the AI ​​speaks faster, I think that voice sounds insanely good to me. If you send that over the telephone, you can barely hear the difference.

The data that AI learns from can also be ‘poisoned’. For example, I know a well-known security researcher from Denmark, Peter Kruse, who asked ChatGPT who exactly Peter Kruse was. At some point he receives a response from ChatGPT stating that he has died. How is that possible? The AI ​​had misinterpreted some information, causing the system to believe he had died. I personally think that is a major threat to AI: incorrect information that is misinterpreted.

In other words: AI systems are far from perfect?

That’s right. Many people ask me whether you can bypass AI and its security mechanisms. That possibility is there: if you know the model a little, you can use certain circumvention techniques. You must be very well informed about how everything works. An example: we also use AI in G DATA products. We use machine learning and deep learning there, in DeepRay and Beast.

DeepRay is based on a neural network and makes ‘deep analyses’. He will look to see if there are packers that he can see through, so to speak. Packers are programs that mask malware so that you cannot immediately detect it. Then you also have Beast, a type of behavioral protection that also uses AI techniques with machine learning. Theoretically, if you have those two programs on your device, you no longer need protection. That’s not entirely correct: you can clearly see that you can avoid those things if you know how they work. You can still do that with a lot of those programs. This means that it has not yet been trained enough or that the model is not yet robust enough. It’s a bit like humans: if you know how someone thinks, you can draw that person into a conversation.

AI is now a bit everywhere. Ultimately, we have to take stock: is it bad for our online security and privacy, or is it rather positive?

I think it’s mostly good. These AI systems are becoming increasingly better at anticipating threats and preventing them. The AI ​​components that are in our products, but also in those of competitors, support each other and push each other to a higher level. That wasn’t the case before. We used to do it effectively with signatures and that was unsustainable.

So I think it’s a good thing and we will ultimately come out of it safer. There are a lot of things that are not yet optimal, simply because we are human. Precisely because we can throw that AI on top, we will be able to fill in the piece that we are currently missing. One also influences the other: if one company does something, the other does too. That leads to improvement. I think that is very positive and that AI can help us more than we ever suspected. Moreover, we are only at the beginning: as we see it now, I think it is only a small beginning of what it will really become.