You used to build a wall to keep them out, but now hackers are destroying you from the inside
Some time in 2017, a casino in North America called in Darktrace, a British cybersecurity company, to investigate a data leak. Most cybersecurity firms promise to block outside attackers from penetrating your organisation, but Darktrace, founded by former MI5 operatives and Cambridge mathematicians, deploys a subtler approach. It uses machine learning to get to know a company from the inside – well enough that it can spot any deviation from the normal patterns of daily work. When the system spots something suspicious, it alerts the company.
Darktrace usually tells its customers not to expect much useful information in the first week or so, when its algorithms are busy learning. In this instance, though, the system almost immediately reported something odd: data leaking out of a fish tank. The casino had recently installed the aquarium as an attraction for guests. It had electronic sensors that communicated with the tank-servicing company, so that if the water dropped below a certain temperature, someone would be dispatched to fix the problem. Darktrace noted that more than 10GB of data had been transferred, via the tank, to an external device with which no other company device had communicated. An attacker in Finland had found an entry point to a supposedly well-protected citadel.
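The fish-tank detection rests on a simple idea: learn each device's normal peers and transfer volumes, then flag anything outside that baseline. As a rough illustration only (this is not Darktrace's actual system; the device names and thresholds below are invented for the example), the logic might be sketched like this:

```python
from collections import defaultdict


class TrafficBaseline:
    """Toy model of baseline-and-deviation network monitoring."""

    def __init__(self, volume_factor=3.0):
        self.peers = defaultdict(set)     # device -> hosts it has talked to
        self.volumes = defaultdict(list)  # device -> past transfer sizes (bytes)
        self.volume_factor = volume_factor

    def observe(self, device, host, nbytes):
        """Record a transfer seen during the learning period."""
        self.peers[device].add(host)
        self.volumes[device].append(nbytes)

    def is_anomalous(self, device, host, nbytes):
        """Flag transfers to a never-before-seen host, or far above the norm."""
        new_host = host not in self.peers[device]
        history = self.volumes[device]
        average = sum(history) / len(history) if history else 0
        oversized = bool(history) and nbytes > self.volume_factor * average
        return new_host or oversized


baseline = TrafficBaseline()
# Learning period: the tank's sensors send small readings to the servicing company.
for _ in range(100):
    baseline.observe("fish-tank", "tank-service.example", 2_000)

# A routine reading stays quiet; 10GB to an unknown external host raises an alert.
baseline.is_anomalous("fish-tank", "tank-service.example", 2_500)   # normal
baseline.is_anomalous("fish-tank", "203.0.113.7", 10_000_000_000)   # anomalous
```

In this toy version, the alert fires precisely because the external device had never communicated with anything on the network before – the same property Darktrace reported about the Finnish attacker's endpoint.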
Strange as it is, the fish-tank story is reassuringly familiar. We are used to the idea that there are bad actors out there attempting to hack into companies and governments. But in fact, many of the threats that Darktrace uncovers are perpetrated by trusted people inside an organisation. “The incidents that make my jaw drop, the really audacious ones, tend to involve employees,” says Dave Palmer, co-founder and director of technology at Darktrace. He told me about the case of an Italian bank which discovered, after installing Darktrace’s system, that computers in its data centre were engaged in unusual activity. Eventually, they were found underneath the false flooring (data centres have raised floors to allow for air circulation). Members of the bank’s IT team had been siphoning off new computers, hiding them and using them to mine bitcoin. That wasn’t the only incident: at another company, an executive had set up a porn site, complete with billing system, from his office PC. In another case, a senior employee at a retailer was sending customer credit-card details to a site on the dark web.
The notion of the insider threat has become a hot topic among those whose job it is to protect organisations against digital crime. “If I was a chief security officer, my own employees would be what keeps me up at night,” says Justin Fier, Darktrace’s director for cyber intelligence and analytics. Cybersecurity firms are turning their gaze away from the horizon and back to the citadel itself. This represents a huge shift in the way managers think about the integrity of their organisations. Employees need to get used to the idea that they may be one false move away from being deemed human malware.
In 1988, Robert Morris, a graduate student at Cornell University in New York, set out to gauge the size of the internet by writing a program capable of burrowing into different networks. The worm he released from the Massachusetts Institute of Technology (MIT) servers had an impact he did not anticipate. It spread aggressively and rapidly, leaving copies of itself on host computers and overloading systems. Unwittingly, Morris had created a worm that crashed much of the then-nascent internet, affecting hundreds of businesses. In 1989, Morris was indicted under the newly minted Computer Fraud and Abuse Act (he was later appointed an assistant professor at MIT).
The Morris Worm, as it became known, was a prototype computer virus: code capable of spreading from host to host, replicating itself. It was also the first well-known instance of what became known as a denial-of-service attack, in which the perpetrator, instead of trying to steal data, seeks to make a system impossible to use. The cybersecurity industry was formed in response to the Morris Worm and other nuisance attacks. As companies became more reliant on computers for their day-to-day operations, jobs were created for experts who could stop viruses from entering their networks. The industry’s mantra in those early days was “Prevention is better than cure”.
Today, it is becoming accepted that there is no such thing as prevention. “Organisations are going to get breached. It’s a question of when, not if,” says Fier. Based in Washington DC, he joined Darktrace three years ago, after working for US intelligence agencies on counterterrorism. “Networks have highly porous perimeters. A skilled adversary will always find a way in.”
Conventionally, security software programs have acted as gatekeepers patrolling the company’s perimeter. They man the gates, scanning for features that fit the descriptions they’ve been given of known digital attackers. But today, malware is much easier to create and distribute. Viruses move faster and act more intelligently. Their creators give them baffling, frequently changing disguises which confuse software designed to recognise known threats.
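The weakness of that gatekeeper model is easy to see in miniature. Signature scanning typically amounts to comparing a fingerprint of incoming content against a database of known-bad fingerprints; the sketch below (with an invented one-entry "signature database") shows why a trivially mutated virus slips straight past it:

```python
import hashlib

# Hypothetical signature database: fingerprints of malware seen before.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"evil payload v1").hexdigest(),
}


def scan(payload: bytes) -> bool:
    """Return True if the payload's fingerprint matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_HASHES


scan(b"evil payload v1")  # caught: matches a known signature
scan(b"evil payload v2")  # missed: one changed byte yields a new fingerprint
```

Because any change to the payload produces an entirely different hash, malware that constantly rewrites its own disguise never matches yesterday's signature list – which is why the anomaly-based approach watches behaviour instead of fingerprints.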
Organisations are also vulnerable at many more points. The internet of things is rapidly expanding what security experts call “the attack surface”. Intruders can now enter an organisation through a vending machine, a smart thermostat or a TV, not to mention one of the many connected devices that employees carry or wear every day. The gatekeepers, outwitted and overrun, have responded like authoritarian leaders attempting to clamp down on crime, introducing increasingly draconian security policies. But when employees subsequently find it harder to work, innovate and experiment, the business suffers.
The human brain has two ways of coping with risk. The first is to spot a threat and instigate the appropriate action. Psychologist Daniel Gilbert describes the brain as “a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get.” A primate on the savannah knows lions are a threat to her safety. When she sees one, a feeling of fear drives her to run or hide. This is our most ancient form of risk-management.
A second capacity to avoid possible dangers was developed much later in human evolution: the ability to anticipate and pre-empt. Hence helmets, insurance and antivirus software. This second approach means we can arrange our lives in such a way as to reduce exposure to threats. But it comes with downsides. For one thing, it hampers our freedom. If you know of a spot in the jungle where lions are frequent visitors, you don’t go there, even if there might be something wonderful to see or eat in that locale. For another, it relies on you having a fair idea of what future dangers might be. After imposing constraints on your jungle-roaming and investing in lion-protective body armour, you end up dying from a snakebite to the foot.