Artificial Intelligence may be the way out to counter cyber threats

Cyber threats could become life-threatening once we have driverless cars or factories run by robots. A hacker could break into the systems and hold us to ransom, steal data or simply rob banks


Abhijit Roy

Over two lakh computers across nearly 100 countries have been crippled by a malware attack over the past week. A malware tracking map showed that infections by ‘WannaCry’, a ransomware, were widespread. Britain cancelled or delayed treatments for thousands of patients. Train systems were hit in Germany and Russia, and phone companies in Madrid and Moscow. Renault’s futuristic assembly line in Slovenia, where rows of robots weld car bodies together, was stopped cold. In Brazil, the social security system had to disconnect its computers and cancel public access. Global corporations like FedEx, Nissan and several others have been hit. No one knows the extent of the financial losses as of now. Almost 100 systems in India have been affected by the WannaCry attack, which has shown how even the most powerful systems are helpless before hackers.


In the first six months of 2016, over 180 Indian companies were victims of “ransomware”, or online extortion schemes. A related form of online extortion, Business Email Compromise (BEC), caused companies worldwide a loss of a whopping $3 billion last year, according to some reports. BEC schemes are scams that compromise business email accounts to facilitate unauthorised fund transfers, and are considered among the most dangerous threats to organisations.


According to Trend Micro Incorporated, a global leader in security software and solutions, 2016 proved to be a year of online extortion through various malicious attacks.


The more economies and businesses transform into digital operations, the more vulnerable they become to cyber threats from malware, viruses, ransomware and the like. These threats could turn life-threatening once we have driverless cars on our roads or factories run by robots. A hacker could break into the systems that run these machines and wreak havoc, hold us to ransom, steal data or simply rob our banks, as happened with Bangladesh’s central bank, which was robbed of almost $100 million a few months ago. But the good news amidst the calamitous predictions is that the same technologies can also save us from such apocalyptic scenarios.


Security experts will tell you that humans are the weakest link in the cyber security world. The facts speak for themselves: over 90% of security incidents are due to human error. AI systems can learn a user’s behavioural patterns and, in future, may even intuit answers to email correspondence and offer the user a draft reply.


Something similar is happening in the cyber security field in the hunt for hackers. This is how it works: security companies apply smart prediction to find hacker patterns and identify the tactics used to conduct attacks. From this, the AI ‘learns’ the behaviour and can predict what kind of attack might come next. There may not even have been a fresh attack, or any real information about one, yet the AI has already assessed the hackers’ patterns.
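
For the technically minded, here is a minimal sketch of the idea in Python, using the scikit-learn library: a classifier is trained on features of past incidents and then labels fresh activity by the attack pattern it most resembles. The features, figures and attack labels are invented for illustration; no real product learns from data this simple.

```python
# A minimal sketch of 'learning' attacker behaviour from past incidents.
# Feature names and values are hypothetical; real systems draw on far
# richer telemetry (network flows, process trees, file activity, etc.).
from sklearn.ensemble import RandomForestClassifier

# Each row: [requests_per_minute, failed_logins, data_out_kb]
past_incidents = [
    [900, 2, 10],    # volumetric flood
    [1100, 0, 5],    # volumetric flood
    [15, 120, 0],    # password brute force
    [10, 95, 2],     # password brute force
    [20, 3, 5000],   # data exfiltration
    [25, 1, 7200],   # data exfiltration
]
labels = ["ddos", "ddos", "bruteforce", "bruteforce", "exfil", "exfil"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(past_incidents, labels)

# New, unlabelled activity: the model predicts which known attack
# pattern it most resembles, before an analyst has looked at it.
print(model.predict([[12, 110, 1]]))  # many failed logins: brute force
```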


Behavioural profiling is increasingly recognised as a new level of protection against cyber attacks and systems abuse, offering the potential to pick out new and unknown attacks, or to spot activities that existing defences would otherwise miss. The basic premise is to establish a sense of how the system and its users normally behave, which provides a basis for protecting against compromise by watching out for unwanted activities.


The fundamental value of profiling is that while we may not know who the attackers and cybercriminals are, we know what they’re likely to be doing. Similarly, we ought to be able to develop a picture of what our legitimate users should normally be doing, and then pick out things that appear unusual. While we cannot monitor and inspect everything manually, automating the process enables the system to keep a watch on itself. Developing an understanding of behaviour is not a new idea in security terms, and it already has uses in a variety of related contexts. For example, behavioural monitoring of some form is a long-standing technique in the context of Intrusion Detection Systems (IDS).
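
A toy version of such a profile can be surprisingly simple. The sketch below, in Python, learns a user’s typical login hours and flags a login that falls far outside them; the user, history and threshold are all hypothetical.

```python
# A toy behavioural profile: learn each user's typical login hours,
# then flag logins that deviate sharply from that baseline.
from statistics import mean, stdev

login_history = {"alice": [9, 10, 9, 11, 10, 9, 10, 11]}  # hours of day

def is_unusual(user, hour, history, z_threshold=3.0):
    """Flag a login whose hour is more than z_threshold standard
    deviations away from the user's historical average."""
    hours = history[user]
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

print(is_unusual("alice", 10, login_history))  # False: a normal working hour
print(is_unusual("alice", 3, login_history))   # True: a 3 a.m. login looks odd
```

Real systems profile far more than login times, of course, but the principle is the same: the baseline is learned from the user’s own history, not written by hand.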


Similarly, a link can be drawn to the use of behavioural analysis in malware detection, where unknown code is assessed to determine whether it performs malware-like actions when executed (essentially, looking to see whether it behaves in ways established by profiling previously known malware). Nor is profiling useful only against external attackers: it can also offer a means to identify insider threats, such as fraudulent behaviour and other misuse of privileges.
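
A rough, hypothetical sketch of the malware side of this idea: compare the actions an unknown sample performs when executed against a profile of actions drawn from known malware. The action names and the flagging threshold below are invented.

```python
# Behaviour-based malware assessment: score an unknown sample by how many
# of its observed actions match a profile built from known malware.
MALWARE_LIKE_ACTIONS = {
    "encrypts_many_files",
    "deletes_shadow_copies",
    "adds_autorun_entry",
    "contacts_known_c2_domain",
}

def malware_score(observed_actions):
    """Fraction of the malware profile matched by the observed actions."""
    matches = MALWARE_LIKE_ACTIONS & set(observed_actions)
    return len(matches) / len(MALWARE_LIKE_ACTIONS)

sample = ["opens_document", "encrypts_many_files", "deletes_shadow_copies"]
score = malware_score(sample)
print(f"score={score:.2f}, flagged={score >= 0.5}")  # score=0.50, flagged=True
```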


The IDS context provides a good example of the contrast between profiling ‘normal’ activity and spotting the signs of known bad behaviour (termed anomaly-based and misuse-based detection respectively). Both rely upon monitoring current activity to spot potential attacks, but they approach the task in different ways. With misuse-based detection, the attacker’s behaviour has essentially been profiled in advance and codified as signatures that attempt to describe attacks, misuse and other unwanted activity.


Anomaly detection, meanwhile, attempts to characterise normal behaviour and then flags significant departures from it, on the basis that they may denote something bad and are worthy of further examination. This option is more explicitly linked to building a profile of behaviour, and is also referred to as behaviour-based detection.
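
The contrast between the two approaches can be sketched in a few lines of Python. The signatures and baseline figures below are invented for illustration.

```python
import re

# Misuse-based: known-bad behaviour is codified in advance as signatures.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),  # SQL injection attempt
    re.compile(r"\.\./\.\./"),          # path traversal attempt
]

def misuse_alert(log_line):
    return any(sig.search(log_line) for sig in SIGNATURES)

# Anomaly-based: characterise 'normal' (here, a request-rate baseline),
# then flag significant departures from it.
NORMAL_REQ_PER_MIN, TOLERANCE = 40, 3  # baseline mean, allowed multiple

def anomaly_alert(req_per_min):
    return req_per_min > NORMAL_REQ_PER_MIN * TOLERANCE

print(misuse_alert("GET /page?id=1 UNION SELECT password FROM users"))  # True
print(anomaly_alert(500))  # True: far above the learned baseline
```

Note the trade-off: the signature catches only attacks someone has already described, while the baseline catches anything unusual, including things that may turn out to be harmless.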


Of course, the idea of being monitored may not be entirely palatable to our own users. However, behavioural monitoring is arguably an extension of the kind of manager-level observation of staff that is regularly advocated in standard security guidance. The difference is the automation, which allows it to scale up and enables the profiles to be identified in the first place. At the implementation level, this can be achieved via machine learning and other artificial intelligence and statistical techniques for data analysis. These, in turn, enable pattern identification and classification, often profiling characteristics that would be too subtle for human observation to identify.
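
As a minimal sketch of that implementation level, an off-the-shelf algorithm such as scikit-learn’s Isolation Forest can learn what ‘normal’ sessions look like across several features at once and flag outliers that no single feature would reveal. The session features and numbers below are purely illustrative.

```python
# Multi-dimensional behavioural profiling with an off-the-shelf model.
from sklearn.ensemble import IsolationForest

# Each row: [session_minutes, files_accessed, mb_downloaded]
normal_sessions = [
    [30, 12, 5], [45, 20, 8], [25, 9, 4], [50, 22, 10],
    [35, 15, 6], [40, 18, 7], [28, 11, 5], [48, 21, 9],
]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_sessions)  # learn the shape of 'normal' use

# predict() returns 1 for inliers, -1 for anomalies.
print(detector.predict([[35, 14, 6]]))     # [1]: looks like normal use
print(detector.predict([[30, 400, 900]]))  # [-1]: mass download, flagged
```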


If we can spot an impostor, we have a frontline defence against unauthorised access and can trigger the defence mechanisms.


Abhijit Roy writes on technology issues. He is based out of Kolkata
