2021. An employee at SwarmEe, a startup in China that specialises in drone swarms for policing and security, receives an interesting email on her phone: a book signing at a nearby bookshop by one of her favourite authors, one she has been dreaming of meeting for a long time. When she clicks the email on her personal phone, a malicious script, unknown to her, switches on her Bluetooth and begins communicating with the air-gapped terminal she is using to work on the drones' firmware.
A year before, the engineering team building the drones' terminal control software had been the subject of a social engineering attack that added a backdoor port, which went undetected. Using that backdoor, the malware on the phone now pushes a small script into the drone software.
Two years later, one of the world's major governments is using a SwarmEe drone swarm as part of its security screening for a big event: camera feeds from all the drones stream to an AI system in the backend that constantly scans the crowd for threats during a speech by the country's prime minister. Unknown to anyone, the swarm is also relaying key information to an unauthorised location.
As the prime minister's speech begins, a new drone that looks exactly like the others joins the swarm. The malicious script pushed nearly two years earlier recognises the new drone's ID as a member of the swarm, preventing it from being flagged to the security team. The rogue drone begins to make its way towards the prime minister's dais. Footage from the other drones would later show it getting very close to the leader and then plunging down once its cameras identified the target. A proximity sensor triggered a hidden explosive device, causing injuries. Only a last-minute response from security prevented an assassination.
Later investigation would not reveal much. The stray drone was bought online and delivered to a locker, then modded to look exactly like the drones in the swarm. SwarmEe would discover the vulnerability and immediately push a patch to hundreds of thousands of its drones around the world while recalling many others.
…..
The above scenario is not science fiction. It could be the very near future. In 2017, nearly a billion people were affected by cybercrime and collectively lost $172 billion. Malware that can lock users out of systems, bring down equipment and cause malicious breakdowns is one of the modern day's biggest threats, and it is getting more dangerous.
At the Black Hat conference this year, IBM demonstrated DeepLocker, a proof of concept for the kind of AI-powered malware that could arrive in the real world. DeepLocker has two powerful features that could make malware far more dangerous: obfuscation and targeting.
Program, are you good or bad?
DeepLocker hides inside an innocuous application (IBM's demonstration concealed it in video-conferencing software) and may sit on a computer with little or no chance of detection. What makes it dangerous, however, is its ability to hide its 'trigger condition' (the moment it unleashes its true malicious intent) from detection by using a deep neural network. The trigger could be anything from someone's face or voice to a certain location or particular network conditions. The problem is that it would be nearly impossible to detect the trigger conditions (and thus detect the malware) because they lie hidden inside a deep neural network.
This goes to the heart of the problem with deep learning: it is a black box. Malware and detection programs play a game of hide and seek. Malware writers go to great lengths to obfuscate their code so it looks like a harmless piece of functioning software. Detection programs constantly update the patterns and rules that identify when a piece of code or behaviour is malicious. With deep-learning-based code, however, it becomes extremely difficult to analyse the intent of a program by just looking at the code.
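To make the black-box problem concrete, here is a minimal, purely hypothetical Python sketch of the idea (it is not DeepLocker's actual code; the model, the features and the placeholder hash are all illustrative stand-ins). The target is never stored anywhere in the program; only a key derived from a neural network's output would unlock the payload, so an analyst reading the code finds no face, name, rule or signature to scan for.

```python
# Hypothetical illustration of a concealed trigger condition.
# None of this is DeepLocker's real implementation; the "model" is a stand-in.
import hashlib
import numpy as np

def embedding(image: np.ndarray) -> bytes:
    """Stand-in for a trained face-recognition network.

    A real system would use a deep model that maps the target's face to a
    stable code; a fixed random projection merely illustrates the idea.
    """
    rng = np.random.default_rng(seed=0)
    weights = rng.standard_normal((image.size, 128))
    bits = (image.reshape(-1) @ weights > 0).astype(np.uint8)
    return bits.tobytes()

def derived_key(image: np.ndarray) -> bytes:
    # The unlocking key only comes into existence when the right input appears.
    return hashlib.sha256(embedding(image)).digest()

# The shipped artefact carries only a hash of the key (a placeholder below);
# neither the code nor the hash reveals who or what the trigger is.
TARGET_KEY_HASH = "placeholder-computed-offline-from-the-target-image"

def trigger_met(candidate: np.ndarray) -> bool:
    """True only for an input that reproduces the hidden key."""
    return hashlib.sha256(derived_key(candidate)).hexdigest() == TARGET_KEY_HASH
```

The point of the sketch is simply this: everything an analyst can read is generic maths, while the malicious condition exists only implicitly in the network's learned weights.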
Going undetected is invaluable. Stuxnet, the worm that sabotaged Iran's uranium enrichment programme, went undetected for months and caused real-world damage of perhaps world-changing significance. The malware infiltrated industrial and engineering systems, spreading through multiple companies, computer systems and USB drives, all the while evading malware scanners and security.
With AI-based malware, however, we could now see the emergence of viruses and malware that spread undetected across millions of computers around the world for long periods of time. This becomes even more problematic when combined with the second dangerous feature: targeting. Unlike 'spray and pray' malware that relies on volume for its payoff, targeted malware could lie silent, waiting until the right target presents itself before unleashing havoc.
Malware for one, please
We are making exponential advances in AI's ability to detect and understand visual images, recognise people, understand sounds and even mimic people. Consider the AI-led features Google is bringing to its phones for millions of consumers: a rapidly improving visual recognition system that turns the camera into a digital eye, the ability to professionally manipulate images, the ability to identify voices and parse them for meaning, and, eeriest of all, Duplex's capability to simulate the human side of a conversation. Although these capabilities currently perform well only in targeted environments, they are advancing rapidly.
Ransomware like WannaCry, which brought down many systems around the world, could now get more targeted and thus more dangerous. Such a virus could spread across systems all over the world until, months or even years later, it recognises the face coming up through the webcam or the voice coming through the microphone, and triggers its intent.
Targeting could get more insidious. For instance, white-supremacist malware could target only people of colour, or malware could single out a certain religion or caste. As AI gets increasingly better at telling people apart based on patterns, sounds, visuals and symbols, targeting can also get increasingly sophisticated and affect social and political orders.
As we move to a more connected and diverse world of intelligent systems such as autonomous vehicles and drones (SoftBank believes there will be a trillion connected devices by 2025), the scope and sophistication of targeting will grow exponentially. Drones with powerful cameras that can recognise targets will move from the realm of powerful militaries to being available commercially with a little tweaking. Sensors that can see, hear or detect our behaviour and health could become common and pervasive, all sources of data that, when triangulated, would make a powerful weapon.
Self-driving vehicles (at Level 4 or Level 5 autonomy, requiring no human intervention) will have sophisticated detection systems, including radar, imagery and audio, that can capture and relay extremely valuable information. Malware that can infect a self-driving car's systems to target something or someone could be a powerful weapon.
Socially engineering vulnerabilities
If you've ever used Lyrebird, you know how fascinating and creepy it is. It takes recordings of anyone's voice and learns how to 'sound' like that person. Feed it a snippet of text and it will read it out eerily close to how the person it was trained on sounds. What used to be a systems program attacking files or changing settings could now be an automated voice call that sounds just like your boss, communicating decisions that compromise the company or a project.
Such attacks used to be inefficient and expensive because they took up time and resources. Now, malware and phishing systems could penetrate thousands of computers, scour all the available data and collect more if required (through sophisticated social engineering) to build a complex personal profile and plot out the best move against each individual.
A malware attacker wanting to get into a highly secure company system could start from the social media data of employees, identifying targets who work in a certain building or location using a combination of text, images, location tags and other information that lies exposed to the public eye. Tools already exist that can automate this entire process, enabling attackers to operate at scale. Once specific individuals are identified, the second level of targeting becomes personalised, through social media friend requests or communities of interest. Innocuous-looking photos, brochures and other 'highly relevant' information for the target could contain malicious code that enters a phone or laptop. This opens up new information, including conversations captured inside the building, images of the interiors and the entry process.
These attacks could run fully automated, with little to no intervention from individuals. And human systems and processes aren't fast enough to detect, let alone respond to, these threats.
Machines fighting each other
This leads us to the other harrowing conclusion: if machine learning is going to make malware powerful, we have to rely on machine learning to make our defences powerful as well. Deep learning can now help systems 'learn' what is normal in a network in order to detect abnormal behaviour even when no existing malware pattern fits that behaviour.
Currently, the biggest value of employing machine learning against malware attacks is its ability to filter the really dangerous or malicious events out of the deluge of security alerts that reach a human security analyst. In that sense, it augments and improves the job of the human security expert. With time and more pervasive use, however, one could expect these systems to automate the monitoring of, and response to, a large number of threats.
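What 'learning what is normal' can look like in practice is easier to see with a small example. The sketch below is a toy, not a production system: it uses scikit-learn's IsolationForest on made-up network-flow features (outbound bytes, hosts contacted, failed logins) invented purely for illustration. The principle, though, is the one described above: flag deviations from a learned baseline rather than match known signatures.

```python
# Toy anomaly-detection sketch: learn "normal" traffic, then flag deviations.
# The features and numbers below are synthetic placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend feature vectors: [bytes_out_per_minute, distinct_hosts_contacted, failed_logins]
normal_traffic = np.column_stack([
    rng.normal(5_000, 1_000, size=2_000),  # typical outbound volume
    rng.poisson(8, size=2_000),            # typical fan-out
    rng.poisson(0.2, size=2_000),          # occasional failed login
])

# Train only on behaviour observed during a quiet baseline period.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Two new observations: one ordinary, one exfiltration-like burst.
new_events = np.array([
    [5_200, 7, 0],       # looks like business as usual
    [250_000, 90, 15],   # huge outbound volume, wide fan-out, many failures
])

labels = detector.predict(new_events)             # +1 = normal, -1 = anomalous
scores = detector.decision_function(new_events)   # lower = more anomalous
for event, label, score in zip(new_events, labels, scores):
    print(event, "anomaly" if label == -1 else "normal", round(float(score), 3))
```

No signature of the exfiltration burst exists beforehand; it is flagged only because it does not resemble the learned baseline, which is exactly why such systems can surface threats that rule-based scanners miss.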
The trouble is that we have the same problems with an AI defender as we do with the malware. While the functional intent is to detect and weed out any malware, the obfuscation of intent at the code level applies here too. What is to say we won't end up with an AI defender we can no longer predict or control? What would it treat as malware?
More importantly, what happens when the obfuscation runs so deep that we are almost fully dependent on AI malware defences to keep our basic infrastructure operating? There is no 'off' switch for this, unless you want to grind the world to a halt.
Malicious malware or Fascist state?
The rise of AI-constructed malware will likely create the perfect petri dish for the concentration of power and for greater monitoring and control.
Using powerful AI to defend networks would be the perfect excuse to collect and monitor large amounts of data and to track individuals at a scale never attempted before. Startups in China such as Megvii and SenseTime are building powerful facial recognition capabilities that are already being used by the government. Megvii's Face++ technology uses images from nearly 170 million CCTV cameras and sophisticated AI to power a surveillance state. The US immigration authorities are considering the use of facial recognition technology, including Amazon's Rekognition, to monitor immigrants.
A national ID, linked biometrically and trackable everywhere, could become mandatory for every online interaction or transaction. The pitch would be security: ensuring that interactions are legitimate and that we are not succumbing to a phishing attack. Companies would want to monitor every single email and perhaps take control of personal devices to ensure that no malicious app or malware uses them to gain control of their systems.
This places power in the hands of governments and corporations, with increasingly centralised control to track every individual. The danger is that we may be more than willing to go along. A recent Norton cybersecurity report revealed that a staggering 81% of people trust their governments to keep their data safe and protected. In a world that is increasingly complex, where dangers supposedly lurk around every corner, we may end up giving up our liberties for safety and authoritarian control. And therein lies the bigger danger.