Deterrence With Imperfect Attribution: A Better Kind of Cybersecurity Strategy
A new model shows why countries that retaliate too aggressively against online attacks can make things worse for themselves.
During the opening ceremony of the 2018 Winter Olympics, held in PyeongChang, South Korea, Russian hackers launched a cyberattack that disrupted television and internet systems at the games. The incident was resolved quickly, but because Russia routed the attack through North Korean IP addresses, the source of the disruption was unclear in the immediate aftermath.
There are lessons to be learned from that attack, and others like it, at a time when hostilities between countries increasingly play out online. In contrast to conventional national security thinking, such skirmishes call for a new strategic outlook, according to a new paper co-authored by an MIT professor.
At the heart of the matter are deterrence and retaliation. In conventional warfare, deterrence usually consists of threatening counterattacks against the enemy. But in cybersecurity, things are more complicated: if identifying a cyberattacker is difficult, then retaliating too quickly or too often, on the basis of limited information such as the location of a particular IP address, can be counterproductive. Indeed, it can embolden other countries to launch their own attacks by making them think they will not be blamed.
“If one country becomes more aggressive, then the equilibrium response is that all countries will end up becoming more aggressive,” said Alexander Wolitzky, an MIT economist who specializes in game theory. “If after every cyberattack my first instinct is to retaliate against Russia and China, this gives North Korea and Iran impunity to engage in cyberattacks.”
But Wolitzky and his colleagues believe there is a viable new approach, involving a more judicious, better-informed use of selective retaliation.
“Imperfect attribution makes deterrence multilateral,” Wolitzky said. “You have to think about everybody’s incentives together. Focusing your attention on the most likely culprits could be a big mistake.”
The paper, “Deterrence with Imperfect Attribution,” appears in the latest issue of the American Political Science Review. In addition to Wolitzky, the authors are Sandeep Baliga, the John L. and Helen Kellogg Professor of Managerial Economics and Decision Sciences at Northwestern University’s Kellogg School of Management; and Ethan Bueno de Mesquita, the Sydney Stein Professor and deputy dean of the Harris School of Public Policy at the University of Chicago.
The study is a joint project that took shape when Baliga added to the research team by contacting Wolitzky, whose work applies game theory to a wide variety of situations, including war, international affairs, network behavior, labor relations, and even technology adoption.
“In some ways, this is a canonical question for game theorists to think about,” Wolitzky said, noting that the development of game theory as an intellectual field stems from the study of nuclear deterrence during the Cold War. “We were interested in what’s different about cyberdeterrence, in contrast to conventional or nuclear deterrence. And of course there are many differences, but one thing we latched onto quite early was this attribution problem.” In the paper, the authors note that, as former U.S. Deputy Secretary of Defense William Lynn once put it, whereas a missile comes with a return address, a computer virus generally does not.
In some cases, states are not even aware of major cyberattacks against them; Iran only belatedly realized that it had been attacked by the Stuxnet worm over a period of years, damaging centrifuges used in the country’s nuclear program.
In the paper, the scholars largely examined scenarios in which countries are aware of cyberattacks against them but have imperfect information about the attacks and the attackers. After modeling these scenarios extensively, the researchers determined that the multilateral nature of cybersecurity today makes it markedly different from conventional security. There is a much greater chance in the multilateral setting that retaliation can backfire, provoking additional attacks from a variety of sources.
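The core incentive problem can be illustrated with a toy simulation. This is not the paper's actual model; the attacker names and attribution probabilities below are invented purely for illustration. The sketch compares two retaliation doctrines and shows how always blaming the "usual suspect" gives a less-suspected attacker complete impunity:

```python
import random

random.seed(0)

# Hypothetical parameters (not from the paper): attacks by 'usual' are
# correctly attributed 90% of the time, while attacks by 'fringe' are
# misattributed to 'usual' 60% of the time.
ATTRIBUTION = {"usual": {"usual": 0.9, "fringe": 0.1},
               "fringe": {"usual": 0.6, "fringe": 0.4}}

def signal(true_attacker):
    """Return the (noisy) attributed source of an attack."""
    return "usual" if random.random() < ATTRIBUTION[true_attacker]["usual"] else "fringe"

def expected_punishment(attacker, doctrine, trials=100_000):
    """Fraction of an attacker's own attacks that draw retaliation on them."""
    hits = 0
    for _ in range(trials):
        blamed = signal(attacker)
        if doctrine(blamed) == attacker:
            hits += 1
    return hits / trials

# Doctrine 1: retaliate against whoever the signal names.
follow_signal = lambda blamed: blamed
# Doctrine 2: always retaliate against the usual suspect.
blame_usual = lambda blamed: "usual"

for name, doctrine in [("follow the signal", follow_signal),
                       ("always blame 'usual'", blame_usual)]:
    for attacker in ("usual", "fringe"):
        p = expected_punishment(attacker, doctrine)
        print(f"{name}: '{attacker}' is punished for {p:.0%} of its attacks")
```

Under the "always blame 'usual'" doctrine, the fringe attacker is never punished for its own attacks, so it faces no deterrent at all, which is the multilateral backfire the article describes.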
“You don’t have to commit to being more aggressive after every signal,” Wolitzky said.
What does work, the researchers find, is simultaneously improving the detection of attacks and gathering more information about the identity of attackers, so that a country can pinpoint which other nations it could meaningfully retaliate against.
But gathering more information to inform strategic decisions is a complicated process, as the scholars point out. Detecting more attacks while remaining unable to identify the attackers does not, by itself, clarify any specific decision, for example. And gathering more information while placing too much confidence in attribution can lead a country straight back to the problem of calling out some states even as others continue to plan and carry out attacks.
“Optimal doctrine in this case would, in some ways, commit you to retaliate more after the clearest signals, the most unambiguous signals,” Wolitzky said. “If you blindly commit yourself to retaliating more after every attack, you increase the risk of retaliating after a false alarm.”
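The false-alarm trade-off can be made concrete with a back-of-the-envelope Bayes calculation. The numbers here are hypothetical, not from the paper: suppose 10% of alerts naming a given state reflect a real attack by it, and compare a noisy detector with an unambiguous one.

```python
def posterior(prior_attack, true_positive, false_alarm):
    """P(the named state really attacked | detector flagged it), by Bayes' rule."""
    evidence = prior_attack * true_positive + (1 - prior_attack) * false_alarm
    return prior_attack * true_positive / evidence

# Ambiguous signal (8% false-alarm rate): retaliation would punish an
# innocent party almost half the time.
print(posterior(0.10, 0.90, 0.08))  # ≈ 0.556

# Clear, unambiguous signal (1% false-alarm rate): retaliation is far
# better supported.
print(posterior(0.10, 0.90, 0.01))  # ≈ 0.909
```

The comparison mirrors the quoted doctrine: committing to retaliate only after low-false-alarm signals keeps the probability of punishing an innocent state small.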
Wolitzky points out that the paper’s model can be applied to issues beyond cybersecurity. The problem of stopping pollution can have the same dynamics: if, for example, many firms are polluting a river, singling out one of them for punishment can embolden the others to continue.
Still, the authors hope the paper will generate discussion in the foreign policy community, as cyberattacks continue to be a significant source of national security concern.
“People recognize that the possibility of failing to detect or attribute cyberattacks is important, but there has been less recognition of the multilateral implications of this,” Wolitzky said. “I think there is interest in thinking through its applications.”