So-called cyberwarfare has blurred the boundaries of what war is, raising profound questions about how the U.S. should respond to attacks that occur online and in information networks. This was obvious in the hacking of the Clinton campaign during the 2016 presidential election, which was then magnified by U.S. media attention. Still, the U.S. has yet to determine what happened or how to respond. According to Peter W. Singer, a strategist at the nonpartisan think tank New America and the author of several books on cyber conflict, the U.S. needs to act now, while the phenomenon is in its infancy, to establish norms of what is allowed and what is not. The one thing that’s clear about this new category of conflict is that it will not go away soon.
On March 1, 2017, Singer testified in front of the House Armed Services Committee on what steps the country should take to prepare for decades of cyber mischief and worse.
Q: In your testimony to Congress on March 1 you said that the war now involving cyber is the inverse of the Cold War. Can you talk about what this means—the inverse of the Cold War—and what its implications are?
A: The argument that I was making was that there are apt parallels with the Cold War and also fundamental differences. An apt parallel would be that, contrary to the visions of cyberwar in movies and in D.C. policy circles of power grids going down in a fiery “cyber Pearl Harbor,” what we are seeing is a competition more akin to the Cold War’s pre-digital battles, where you saw a cross between influence and subversion operations with espionage. That’s particularly true with what Russia has been up to.
This means we need to take new approaches to deterrence, reflecting the dual goals of preventing an ongoing conflict from escalating while also responding and better defending ourselves.
Yet, there are also fundamental differences with the Cold War, particularly in how we establish what the norms are. If you go back to the Cold War, everyone was concerned with just one kind of attack, a nuclear one. It was very clear whether it happened or not, and it was very clear who would be behind it. In contrast, now we have multiple different types of cyberattacks, with goals ranging from stealing information to blocking information to changing information. The attribution problem is fundamentally different, not just who did the attack, but even your awareness that you are under attack. There is not a clear cloud of smoke coming after the missiles launch, so we often don’t even know when a cyberattack has occurred.
But some of the normative questions are also different. In the Cold War we were weirdly okay with the Soviets targeting everything from a missile base to a city, but it was understood they couldn’t actually cross the line and conduct the attack. If you hit anything, war is on. In contrast, in cyber conflict, we’re not going to stop all types of cyberattacks, but there are some types of targets that we all have to agree to keep off the table. So stealing information from each other is something that states have always done, throughout history, and now they’re just doing it through digital means. So it’s okay, weirdly enough, if you steal information from one of our government agencies. We may not like it, but that’s within the rules of the game. So for example, the OPM (Office of Personnel Management) breach—which reportedly originated in China—is not a story of “shame on them” but “shame on us” for not securing the information better.
On the other hand, we may say stealing information from a private business violates the rules of international trade. So if you’re stealing a design from a car company and then copying and building that car, that’s a violation of the rules of that game.
Or, a new kind of norm might be that there are some types of targets that are off the table because they’re too clearly escalatory, or too prone to confusion and mistakes. So don’t go monkeying around with nuclear power plant control systems. The consequences of us finding you, or you making a mistake, are too great. That target should be off the table completely.
Q: What you’re describing is a free-for-all. These norms you’re talking about are counterintuitive and weird.
A: They may have once been weird, but they are now the new normal. If you went back in time to the Cold War with the idea that Russia would directly influence the U.S. political system, and that a substantial portion of the government, including the president himself, would just shrug it off? They’d have said it sounds like the plot of that movie starring Frank Sinatra, The Manchurian Candidate. That’s absurd! And yet, that is exactly what is happening now.
Look, we’re down the rabbit hole in a lot of different ways. But again, there are fundamental lessons we should be following. As an example, if you want to establish deterrence, if you want to build norms, then clearly what threatens to undermine your position is inaction when an adversary clearly violates those norms.
Q: The timeline for reaction during the Cold War was the time the missile was in the air. Now, potentially, we don’t know when the breach happens, but we have an infinite amount of time to respond.
A: Yes, the timeline is extended out in both directions. Much of Cold War deterrent strategy was determined basically by physics, the roughly 30-minute window that it took a ballistic missile to arc across the globe and hit another continent. We had to be able to prove that we could respond within that window. Proving that it could be done deterred the other side from a preemptive strike.
In contrast, in this space, in the corporate sector the average time between when a victim is attacked and when they know they’ve been attacked is between 160 and 205 days, depending on the study.
But today your ability to respond doesn’t have to happen within that window. It also doesn’t even have to be a like response. Today, if you cyberattack me, I can respond either through cyber means or I could use all of my other tools of power to find your leverage points.
As an example: Russia. I would argue that their pressure points are a rickety economy with oligarchic structures. It’s the 13th largest economy in the world and falling. So go after that.
You also have to keep in mind that you’re setting examples that everyone else is going to watch. That is yet another downside to us just looking the other way on what is arguably the most important cyberattack in history. And I don’t say that lightly.
By looking away, by not responding, to the Russia campaign against our election, we are telling Russia, “Hey, this works for you.” And as we can see they’ve moved on to similar tactics against our allies, with examples ranging from Great Britain to Norway to the current elections in France, Germany, and the Netherlands. But also, in addition to Russia, other actors out there are watching and learning. One of the underlying lessons of deterrence is that it’s not about punishment for punishment’s sake, it’s about influencing the other actors.
Q: Are we coming to a place where we define cyberattacks as war?
A: I’d put it this way: “cyberwar” is as abused a term as “war” is. We use war to describe everything from a state of armed conflict between nations, like World War II, to a state of strategic competition between nations that never turns to outright conflict, like the Cold War, to political campaigns against everything from poverty to sugar.
I don’t like to use the term cyberwar unless you’re talking about the classic definition of a state of armed conflict where there is actual violence, in which the internet is used not just to steal information but to conduct attacks that would have physical damage. That’s what we’re talking about in war.
But where I especially don’t like to see it is when people say, “Oh, the OPM breach, that was an act of war.” No, no it wasn’t. It was stealing information. Nations have always stolen information. No nation has ever gone to war over stealing information.
Others will say, “The Sony breach—North Korea conducted an act of war!” No, they attacked Sony and outed its secrets. But it’s not in the same category as the 1970s, when North Korean troops murdered U.S. soldiers with axes in plain sight, and we didn’t go to war with them over that. And you’re saying that we would go to war over the fact that now we can read Angelina Jolie’s emails? We need to be precise in what we talk about, what we care about, and what we’re trying to stop.
Q: You’ve mentioned that hacking is an entry point to hearts and minds. In the past, the sphere of attack was mostly governments. But this is now something that individuals feel and that individuals have the ability to respond to.
A: Yeah, I think you’re combining two different things there. The argument back in the Cold War was that the individual was not the target and had no great ability to contribute to the defense. No matter how hard you tried, you were not going to be able to dig a good enough bomb shelter in your backyard that would mean that Russia would count it as a reason not to attack.
In cybersecurity, there’s a lot going on, but individuals matter. They are often the ones being attacked and, importantly, they can undertake a set of cyber hygiene measures that will go an incredibly long way toward preventing, deterring, and discouraging those attacks.
Cyber hygiene won’t solve everything, but—whether we’re talking about political leaders, army officers or someone working at a furniture company—if we raise our game, if we don’t click that link we ought not to, we make the attacker’s job so much harder. We make it so much more difficult for them to succeed.
This is a space where you can build what’s called deterrence by denial or resilience. This is different but related to what you were asking about. During the Cold War, nations tried to subvert and undermine the politics and culture of their adversaries. We’re seeing that play out today, but through digital means. Russia is far more successful than they ever dreamed would be possible back in the Cold War. Sometimes it’s overt on social media, and other times there is a link to the cybersecurity side. So the cyberattacks and the influence operations are related but different.
To give an illustration related to campaigns: Political campaigns have long been targeted by hackers. In the 2008 and 2012 elections, the Obama, McCain, and Romney campaigns were all targeted by foreign state actors that wanted to penetrate them and gain insight into what the campaigns, and more importantly the people in them who were going to move into government, were thinking about policy. That’s always happened.
The difference in 2016 is that instead of merely stealing the secrets, they were then exposed in a way that was designed to embarrass the target. The DNC breach had less in common with the OPM breach than it did with, for example, the Ashley Madison and Sony breaches. The hack was not to steal, the hack was to influence others.
So what are the things we can do? They range from setting up better means to defend our elections—not just the voting machines, as is usually talked about, but the political organizations that were actually the target—to learning from the past. Back in the Cold War, we created a group to respond to Soviet information warfare called the Active Measures Working Group. It was an interagency group that essentially identified Russian misinformation campaigns so that we could then respond to them. The Soviet Union, for example, was spreading false stories like we were using the 1984 Olympic Games as a way to secretly spread AIDS, and stuff like that. We need a similar entity right now that can identify those misinformation campaigns online, out them, and allow us to respond to them.
Such an effort would be important not just in dealing with what Russia is doing, but also in debunking the activity of what translates from Russian as “useful idiots”—people inside our own society who are happy to spread lies and misinformation.
But it also involves actors outside of government, who have to take a long and hard look at themselves. That includes the social media companies, which need to understand that their platforms are being used to take advantage of their customers. We’ve seen bot campaigns that swing from target to target, from trying to influence the Brexit campaign to trying to influence American voters. Obviously the social media companies can respond.
Also, the media needs to take a long, hard look in the mirror. I think it’s fascinating to compare how the media handled the Ashley Madison breach with how it handled the Sony and DNC breaches. Ashley Madison was about people cheating on their spouses. For the most part, the media reported the breaches but not the fruits of the crime, the individuals and what they were doing.
But when it came to Sony and the DNC, the media reported the breach as well as the fruits of the crime. You could say that these two incidents involved people of public interest. Sorry! There were people like that in the Ashley Madison set, too. “But Sony involved a state actor!” So because it was a foreign government influence operation, you played a role in spreading the information it stole?
If the media decides these things case by case, then it’s being inconsistent. My point is that this isn’t only a government issue.
Q: It really does seem to me that what’s different here is the role of individuals to prevent, or foil, these attacks by exercising internet hygiene or not forwarding Facebook posts that look like trash.
A: Exactly, it also goes all the way to the individual. Am I going to play a role in poisoning the system further? And this includes learning to be more discerning. Just because it’s on Facebook doesn’t make it true.
Q: You’ve talked about punishment, having an anti-propaganda agency, but you’ve also talked about a kind of CDC that looks at breaches, reinforcing standards and metrics, offering bug bounties, and other things. Do you have a favorite in there?
A: No, because we need a wide array of activities. I think that there’s a feeling of helplessness—what can we do? Actually, there’s a set of identifiable actions we can take. And, importantly, as my testimony laid out, they’re non- or bipartisan. This doesn’t have to be an R vs. D space. To give an example, during the Obama Administration, a report identified the best practices in the private sector that could be brought into government. Now that we have a Republican president and Congress, they could adopt these suggestions. Bringing ideas from business to aid government is a very Republican theme, so use them. Alternatively, being strong on national defense and deterrence and standing up to Moscow is in the very pedigree of the party of Reagan and Eisenhower. So do it, and be within your own party’s best traditions. The point is that there is a range of things we can and should be doing. History, however, is going to judge us on whether we act or not.
Q: Why won’t they act? Why hasn’t there been more action?
A: Good question.
Peter W. Singer is an expert on 21st century security issues at the New America Foundation, where he is a strategist and senior fellow. He has written many books on security issues and is the author (with August Cole) of the thriller, Ghost Fleet: A Novel of the Next World War.
This essay is part of an Inquiry, produced by the Berggruen Institute and Zócalo Public Square, on what war looks like in the cyber age.