The main issues and trends regarding the use of AI in cybersecurity were discussed by Robert Hannigan, senior executive at BlueVoyant and chair of the LORCA advisory board, speaking during the LORCA Live online event.

Hannigan began by explaining that AI is often confused with automation, and that the two need to be distinguished. He defined AI as “machines that act intelligently on data,” adding that “it’s not just about doing things at greater scale and faster and more efficiently, it’s something more than automation.”

It is for this reason that the former director of GCHQ does not believe we should be overly concerned with the much-discussed scenario of cyber-criminals utilizing AI to launch attacks. “I’ve seen virtually no evidence for this at all,” he said, adding that while malicious actors are increasingly using automated tools at large scale, such as vulnerability scanning, these “are not what I would call AI.”

The one area in which cyber-criminals are leveraging AI is in social engineering attacks, according to Hannigan. Examples include farming social media accounts at scale and using deepfake recordings: “But that’s really about AI-enabled fraud,” he noted.

In regard to the current use of AI in cyber-defense, many of the techniques again fall into the bracket of automation. Anomaly detection and behavioral analytics – learning what is normal in a network through pattern analysis and flagging exceptions – are the areas where AI is starting to take off. However, “we have to be realistic about the fact this is not a silver bullet yet,” commented Hannigan. He explained that it is all too common to run into problems with the two components of AI: data and models. “Clearly, if you don’t have enough data to work on, or if your model isn’t quite right, you’re going to either flood your customer with false positives, or you’re going to miss critical threats.”
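
As a rough sketch of the pattern-analysis approach described above (the library, per-session features, and numbers here are illustrative assumptions, not anything named in the talk), an unsupervised detector can be fit on traffic considered normal and then used to flag exceptions:

```python
# Minimal sketch of "learn what is normal, flag exceptions" anomaly detection.
# IsolationForest and the hypothetical session features are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical features: [bytes sent, connections/min, failed logins]
normal_traffic = rng.normal(loc=[5_000, 20, 0.2],
                            scale=[1_000, 5, 0.4],
                            size=(1_000, 3))

# Learn the shape of "normal" behaviour from the historical data.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Score new sessions: 1 means normal, -1 flags an anomaly.
new_sessions = np.array([
    [5_200, 22, 0],      # looks like ordinary activity
    [90_000, 400, 30],   # exfiltration-like burst
])
print(detector.predict(new_sessions))  # e.g. [ 1 -1]
```

The trade-off Hannigan points to shows up directly in such a model: with too little representative data or a poorly chosen threshold (the contamination parameter above), it will either swamp analysts with false positives or miss genuine threats.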

Therefore, in Hannigan’s view, while behavioral analytics offers huge potential, it is still very much a work in progress.

Finally, Hannigan discussed the important concern of security within AI itself. This particularly relates to more complex AI technologies being developed in areas like medical diagnostics. He noted there has been a great deal of research into the very real possibility of ‘data poisoning,’ in which an attacker corrupts training data so that machines are tricked into incorrectly categorizing inputs, “with potentially very serious consequences.”
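
A toy illustration of label-flipping, one simple form of data poisoning (the dataset, model, and attack fraction below are assumptions for demonstration, not examples from the talk), shows how corrupted training data degrades a classifier:

```python
# Toy label-flipping poisoning demo: corrupting training labels degrades
# the resulting model. The dataset and classifier are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline trained on clean labels.
clean_model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# An attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1_000).fit(X_train, poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In a security or diagnostic setting, the same effect means a deployed system can be steered into miscategorizing inputs, with the serious consequences Hannigan describes.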

Concluding, Hannigan said that these kinds of issues should not put us off from pursuing AI solutions to enhance cybersecurity, “but it is something we need to spend a lot more time on.”