Cybersecurity and AI
Dr Kevin Jones, Global Chief Information Security Officer, Airbus
Cybersecurity has long been a battleground between attackers and defenders, with both sides locked in a spiral of increasing complexity: as quickly as enterprise IT departments develop the systems, procedures, and patches needed to combat the latest malware, the threat mutates into something new, prompting further responses and counter-responses as each side attempts to regain the upper hand.
However, at a time when businesses are increasingly prioritizing their own digital transformation activities, this battleground is taking on added significance: the growing technical complexity of companies provides both an opportunity for hackers and an evolving threat for their CISO counterparts.
As a result, both sides are looking for an edge, with technologies such as machine learning, automation, and artificial intelligence being used both to bolster defences and to upgrade offensive capabilities.
Cyber-attacks in the UK increased by 140 percent in 2018. Against this backdrop of a rising threat level, it is unsurprising that in a recent survey, 56 percent of cybersecurity analysts said they were “overwhelmed” by cyber threats, while almost a quarter said they were unable to investigate all identified incidents.
For those tasked with defending a company’s networks and systems, speed is everything: the longer it takes them to identify and respond to an attack, the greater the damage they are likely to incur.
In the cybersecurity AI arms race, companies that do not embrace automation and orchestration will find defending their IT infrastructures an ever steeper challenge. The attractions of being able to use these new tools to develop and deploy defensive countermeasures quickly are therefore apparent.
Artificial intelligence allows companies to automate processes that would previously have been undertaken by humans, saving time and, in most instances, money, while offering a more robust perimeter to repel attacks.
Conversely, for hackers, developments in AI and automation mean that they can generate malware at a pace that was hitherto unachievable. Furthermore, AI that learns its environment in order to act with malicious intent, or to trigger social engineering attacks, is already on the horizon.
Unsurprisingly, developing the next wave of artificial intelligence technologies is big business: DARPA, the agency at the heart of the US DoD’s approach to military technology, recently announced a $2bn multi-year investment plan to explore new theories and applications that could make it possible for machines to adapt to changing situations. Where the military leads, corporates inevitably follow, and the continued evolution of artificial intelligence will undoubtedly change the cybersecurity landscape.
The implementation of such technologies, however, proves significantly more effective when AI models and methods are developed in-house on representative data and environments. For example, Airbus cyber innovation has recently developed a machine learning method for malware detection that has proved 98 percent effective in laboratory conditions and is already supporting the decision making of Security Operations Centre (SOC) analysts.

However, as these technologies develop, it is not inconceivable that AI itself could become the battleground: if defenders are using more automation to protect their digital assets, it makes sense that hackers will begin to actively target the AI systems tasked with keeping them out. Indeed, research into such “adversarial AI” is already a reality. This opens up a whole new front in the war for cyber supremacy, centred on securing the very integrity of the data, tools, and models that are used to guard against threats. Imagine, for example, if hackers could evade detection by implanting back doors in the very data and code used in the AI that is driving future systems.
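To make the idea of machine-learning-based malware detection concrete, the toy sketch below trains a nearest-centroid classifier on byte histograms, distinguishing text-like files from high-entropy, packed-looking payloads. This is purely illustrative: it is not the Airbus method, and all samples, features, and labels here are invented for the example.

```python
# Toy sketch of ML-based malware detection (illustrative only; not the
# Airbus method). Features: normalised byte histograms. Model: nearest
# centroid. All training data below is invented.
from collections import Counter
import math

def byte_histogram(data: bytes) -> list:
    """Normalised frequency of each of the 256 byte values."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def centroid(vectors: list) -> list:
    """Element-wise mean of a list of 256-dimensional feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(256)]

def distance(a: list, b: list) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample: bytes, benign_c: list, malicious_c: list) -> str:
    """Label a sample by whichever class centroid its histogram is nearer."""
    h = byte_histogram(sample)
    return "malicious" if distance(h, malicious_c) < distance(h, benign_c) else "benign"

# Invented training set: "benign" samples look like ASCII text; "malicious"
# samples mimic high-entropy packed payloads with near-uniform byte spread.
benign_train = [b"hello world, this is plain text" * 4,
                b"configuration file: key=value" * 4]
malicious_train = [bytes(range(256)) * 2,
                   bytes((i * 37) % 256 for i in range(512))]

benign_c = centroid([byte_histogram(s) for s in benign_train])
malicious_c = centroid([byte_histogram(s) for s in malicious_train])

print(classify(b"just some ordinary readable text here", benign_c, malicious_c))
print(classify(bytes((i * 91) % 256 for i in range(400)), benign_c, malicious_c))
```

Real detectors use far richer features (API call sequences, control-flow structure, dynamic behaviour) and stronger models, but the pipeline is the same: extract features, fit on labelled samples, score new ones. It also makes the adversarial-AI risk tangible: an attacker who can poison the training samples or probe the decision boundary can shift those centroids until malicious inputs classify as benign.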
When it comes to cyber security today, the AI arms race is only beginning.