ISO Rejects NSA Encryption Algorithms


The ISO has decided not to approve two NSA-designed block encryption algorithms: Speck and Simon. It's because the NSA is not trusted to put security ahead of surveillance:

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to "insert vulnerabilities into commercial encryption systems."

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a "back door" into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

"I don't trust the designers," Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden's papers. "There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards."

I don't trust the NSA, either.

from https://www.schneier.com/blog/

NSA Insider Security Post-Snowden


According to a recently declassified report obtained under FOIA, the NSA's attempts to protect itself against insider attacks aren't going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department's inspector general completed in 2016.

[...]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of "privileged" users, who have greater power to access the N.S.A.'s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative -- called "Secure the Net" by the N.S.A. -- had some successes, it "did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data."

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 "Secure the Net" initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA's computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD's inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), "NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users." The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, "NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach."

There seem to be two possible explanations for the fact that the NSA couldn't track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA's crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

from https://www.schneier.com/blog/

Who is Publishing NSA and CIA Secrets, and Why?


There's something going on inside the intelligence communities in at least two countries, and we have no idea what it is.

Consider these three data points. One: someone, probably a country's intelligence organization, is dumping massive amounts of cyberattack tools belonging to the NSA onto the Internet. Two: someone else, or maybe the same someone, is doing the same thing to the CIA.

Three: in March, NSA Deputy Director Richard Ledgett described how the NSA penetrated the computer networks of a Russian intelligence agency and was able to monitor them as they attacked the US State Department in 2014. Even more explicitly, a US ally -- my guess is the UK -- was not only hacking the Russian intelligence agency's computers, but also the surveillance cameras inside their building. "They [the US ally] monitored the [Russian] hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said."

Countries don't often reveal intelligence capabilities: "sources and methods." Because it gives their adversaries important information about what to fix, it's a deliberate decision done with good reason. And it's not just the target country who learns from a reveal. When the US announces that it can see through the cameras inside the buildings of Russia's cyber warriors, other countries immediately check the security of their own cameras.

With all this in mind, let's talk about the recent leaks at NSA and the CIA.

Last year, a previously unknown group called the Shadow Brokers started releasing NSA hacking tools and documents from about three years ago. They continued to do so this year -- five sets of files in all -- and have implied that more classified documents are to come. We don't know how they got the files. When the Shadow Brokers first emerged, the general consensus was that someone had found and hacked an external NSA staging server. These are third-party computers that the NSA's TAO hackers use to launch attacks from. Those servers are necessarily stocked with TAO attack tools. This matched the leaks, which included a "script" directory and working attack notes. We're not sure if someone inside the NSA made a mistake that left these files exposed, or if the hackers that found the cache got lucky.

That explanation stopped making sense after the latest Shadow Brokers release, which included attack tools against Windows, PowerPoint presentations, and operational notes -- documents that are definitely not going to be on an external NSA staging server. A credible theory, which I first heard from Nicholas Weaver, is that the Shadow Brokers are publishing NSA data from multiple sources. The first leaks were from an external staging server, but the more recent leaks are from inside the NSA itself.

So what happened? Did someone inside the NSA accidentally mount the wrong server on some external network? That's possible, but seems very unlikely. Did someone hack the NSA itself? Could there be a mole inside the NSA, as Kevin Poulsen speculated?

If it is a mole, my guess is that he's already been arrested. There are enough identifying details in the files to pinpoint exactly where and when they came from. Surely the NSA knows who could have taken the files. No country would burn a mole working for it by publishing what he delivered. Intelligence agencies know that if they betray a source this severely, they'll never get another one.

That points to two options. The first is that the files came from Hal Martin. He's the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can't be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash: either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it's theoretically possible, but the contents of the documents speak to someone with a different sort of access. There's also nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, and I think it's exactly the sort of thing that the NSA would leak. But maybe I'm wrong about all of this; Occam's Razor suggests that it's him.

The other option is a mysterious second NSA leak of cyberattack tools. The only thing I have ever heard about this is from a Washington Post story about Martin: "But there was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee, one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said." But "not thought to have" is not the same as not having done so.

On the other hand, it's possible that someone penetrated the internal NSA network. We've already seen NSA tools that can do that kind of thing to other networks. That would be huge, and explain why there were calls to fire NSA Director Mike Rogers last year.

The CIA leak is both similar and different. It consists of a series of attack tools from about a year ago. The most educated guess amongst people who know stuff is that the data is from an almost certainly air-gapped internal development wiki -- a Confluence server -- and either someone on the inside was somehow coerced into giving up a copy of it, or someone on the outside hacked into the CIA and got themselves a copy. They turned the documents over to WikiLeaks, which continues to publish them.

This is also a really big deal, and hugely damaging for the CIA. Those tools were new, and they're impressive. I have been told that the CIA is desperately trying to hire coders to replace what was lost.

For both of these leaks, one big question is attribution: who did this? A whistleblower wouldn't sit on attack tools for years before publishing. A whistleblower would act more like Snowden or Manning, publishing immediately -- and publishing documents that discuss what the US is doing to whom, not simply a bunch of attack tools. It just doesn't make sense. Neither do random hackers or cybercriminals. I think it's being done by a country or countries.

My guess was, and is still, Russia in both cases. Here's my reasoning. Whoever got this information years before and is leaking it now has to be 1) capable of hacking the NSA and/or the CIA, and 2) willing to publish it all. Countries like Israel and France are certainly capable, but wouldn't ever publish. Countries like North Korea or Iran probably aren't capable. The list of countries that fit both criteria is small: Russia, China, and...and...and I'm out of ideas. And China is currently trying to make nice with the US.

Last August, Edward Snowden guessed Russia, too.

So Russia -- or someone else -- steals these secrets, and presumably uses them to both defend its own networks and hack other countries while deflecting blame for a couple of years. For it to publish now means that the intelligence value of the information is now lower than the embarrassment value to the NSA and CIA. This could be because the US figured out that its tools were hacked, and maybe even by whom, which would make the tools less valuable against US government targets, although still valuable against third parties.

The message that comes with publishing seems clear to me: "We are so deep into your business that we don't care if we burn these few-years-old capabilities, as well as the fact that we have them. There's just nothing you can do about it." It's bragging.

Which is exactly the same thing Ledgett is doing to the Russians. Maybe the capabilities he talked about are long gone, so there's nothing lost in exposing sources and methods. Or maybe he too is bragging: saying to the Russians that he doesn't care if they know. He's certainly bragging to every other country that is paying attention to his remarks. (He may be bluffing, of course, hoping to convince others that the US has intelligence capabilities it doesn't.)

What happens when intelligence agencies go to war with each other and don't tell the rest of us? I think there's something going on between the US and Russia that the public is just seeing pieces of. We have no idea why, or where it will go next, and can only speculate.

This essay previously appeared on Lawfare.com.

from https://www.schneier.com/blog/

Uber Uses Ubiquitous Surveillance to Identify and Block Regulators


The New York Times reports that Uber developed apps that identified and blocked government regulators using the app to find evidence of illegal behavior:

Yet using its app to identify and sidestep authorities in places where regulators said the company was breaking the law goes further in skirting ethical lines -- and potentially legal ones, too. Inside Uber, some of those who knew about the VTOS program and how the Greyball tool was being used were troubled by it.

[...]

One method involved drawing a digital perimeter, or "geofence," around authorities' offices on a digital map of the city that Uber monitored. The company watched which people frequently opened and closed the app -- a process internally called "eyeballing" -- around that location, which signified that the user might be associated with city agencies.

Other techniques included looking at the user's credit card information and whether that card was tied directly to an institution like a police credit union.

Enforcement officials involved in large-scale sting operations to catch Uber drivers also sometimes bought dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees went to that city's local electronics stores to look up device numbers of the cheapest mobile phones on sale, which were often the ones bought by city officials, whose budgets were not sizable.

In all, there were at least a dozen or so signifiers in the VTOS program that Uber employees could use to assess whether users were new riders or very likely city officials.

If those clues were not enough to confirm a user's identity, Uber employees would search social media profiles and other available information online. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read Greyball followed by a string of numbers.
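To make the mechanics concrete, here is a minimal sketch, in Python, of how signals like the ones described in the excerpt above -- geofenced app opens near agency offices, card numbers tied to a police credit union, and cheap handset models -- could be combined into a crude "likely regulator" score. Every name, threshold, and weight here is invented for illustration; nothing is taken from Uber's actual VTOS or Greyball code.

```python
# Hypothetical sketch of combining weak signals into a "likely regulator"
# score. Names, thresholds, and weights are illustrative only.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def regulator_score(user, enforcement_offices, flagged_bins, cheap_models):
    """Return a score in [0, 1]; higher means more signals matched."""
    signals, total = 0, 3

    # "Eyeballing": many app opens inside a geofence around agency offices.
    near_office_opens = sum(
        1
        for (lat, lon) in user["app_open_locations"]
        for (o_lat, o_lon) in enforcement_offices
        if haversine_km(lat, lon, o_lat, o_lon) < 0.5
    )
    if near_office_opens >= 10:
        signals += 1

    # Card BIN tied to an institution such as a police credit union.
    if user["card_bin"] in flagged_bins:
        signals += 1

    # Device model among the cheapest handsets sold locally.
    if user["device_model"] in cheap_models:
        signals += 1

    return signals / total

# Example: a score above some threshold would get the account tagged.
user = {
    "app_open_locations": [(45.5231, -122.6765)] * 12,
    "card_bin": "411111",
    "device_model": "budget-phone-x",
}
score = regulator_score(
    user,
    enforcement_offices=[(45.5230, -122.6760)],
    flagged_bins={"411111"},
    cheap_models={"budget-phone-x"},
)
print(f"regulator likelihood: {score:.2f}")
```

The point is not the specific heuristics but how little it takes: a handful of weak signals, each individually innocuous, is enough to single out a class of users.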

When Edward Snowden exposed the fact that the NSA does this sort of thing, I commented that the technologies would eventually become cheap enough for corporations to use. Now they have.

One discussion we need to have is whether or not this behavior is legal. But another, more important, discussion is whether or not it is ethical. Do we want to live in a society where corporations wield this sort of power against government? Against individuals? Because if we don't use government to push back against this kind of behavior, it'll become the norm.

from https://www.schneier.com/blog/

NSA Using Cyberattack for Defense


These days, it's rare that we learn something new from the Snowden documents. But Ben Buchanan found something interesting. The NSA penetrates enemy networks in order to enhance our defensive capabilities.

The data the NSA collected by penetrating BYZANTINE CANDOR's networks had concrete forward-looking defensive value. It included information on the adversary's "future targets," including "bios of senior White House officials, [cleared defense contractor] employees, [United States government] employees" and more. It also included access to the "source code and [the] new tools" the Chinese used to conduct operations. The computers penetrated by the NSA also revealed information about the exploits in use. In effect, the intelligence gained from the operation, once given to network defenders and fed into automated systems, was enough to guide and enhance the United States' defensive efforts.

This case alludes to important themes in network defense. It shows the persistence of talented adversaries, the creativity of clever defenders, the challenge of getting actionable intelligence on the threat, and the need for network architecture and defenders capable of acting on that information. But it also highlights an important point that is too often overlooked: not every intrusion is in service of offensive aims. There are genuinely defensive reasons for a nation to launch intrusions against another nation's networks.

[...]

Other Snowden files show what the NSA can do when it gathers this data, describing an interrelated and complex set of United States programs to collect intelligence and use it to better protect its networks. The NSA's internal documents call this "foreign intelligence in support of dynamic defense." The gathered information can "tip" malicious code the NSA has placed on servers and computers around the world. Based on this tip, one of the NSA's nodes can act on the information, "inject[ing a] response onto the Internet towards [the] target." There are a variety of responses that the NSA can inject, including resetting connections, delivering malicious code, and redirecting internet traffic.

Similarly, if the NSA can learn about the adversary's "tools and tradecraft" early enough, it can develop and deploy "tailored countermeasures" to blunt the intended effect. The NSA can then try to discern the intent of the adversary and use its countermeasure to mitigate the attempted intrusion. The signals intelligence agency feeds information about the incoming threat to an automated system deployed on networks that the NSA protects. This system has a number of capabilities, including blocking the incoming traffic outright, sending unexpected responses back to the adversary, slowing the traffic down, and "permitting the activity to appear [to the adversary] to complete without disclosing that it did not reach [or] affect the intended target."
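The passages quoted above describe, in effect, an automated dispatcher that matches incoming traffic against intelligence-derived threat signatures and picks one of a menu of responses. Here is a minimal conceptual sketch of that idea in Python; the rule format, action names, and signatures are assumptions made for illustration, not the NSA's actual design.

```python
# Conceptual sketch of an automated defense dispatcher with the kinds of
# responses the documents describe: block, reset, slow the traffic, or let it
# appear to complete harmlessly. Rule format and actions are assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    BLOCK = auto()          # drop the traffic outright
    RESET = auto()          # send a connection reset back to the sender
    TARPIT = auto()         # slow the traffic down
    FAKE_COMPLETE = auto()  # let it "appear to complete" without effect

@dataclass
class ThreatRule:
    name: str
    signature: bytes  # byte pattern learned from intelligence on the adversary's tools
    action: Action

def dispatch(packet_payload: bytes, rules: list[ThreatRule]) -> Action | None:
    """Match incoming traffic against known-threat signatures and pick a response."""
    for rule in rules:
        if rule.signature in packet_payload:
            return rule.action
    return None  # no known threat: pass the traffic through normally

rules = [
    ThreatRule("known-implant-beacon", b"\xde\xad\xbe\xef", Action.RESET),
    ThreatRule("known-exploit-header", b"EXPLOIT-V1", Action.FAKE_COMPLETE),
]

print(dispatch(b"...EXPLOIT-V1...", rules))  # Action.FAKE_COMPLETE
```

The interesting design point, reflected in the documents, is the fake-complete style of response: letting an attack appear to succeed tells the adversary nothing about which of its tools have been burned.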

These defensive capabilities appear to be actively in use by the United States against a wide range of threats. NSA documents indicate that the agency uses the system to block twenty-eight major categories of threats as of 2011. This includes action against significant adversaries, such as China, as well as against non-state actors. Documents provide a number of success stories. These include the thwarting of a BYZANTINE HADES intrusion attempt that targeted four high-ranking American military leaders, including the Chief of Naval Operations and the Chairman of the Joint Chiefs of Staff; the NSA's network defenders saw the attempt coming and successfully prevented any negative effects. The files also include examples of successful defense against Anonymous and against several other code-named entities.

I recommend Buchanan's book: The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations.

from https://www.schneier.com/blog/

The NSA Is Hoarding Vulnerabilities


The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others' computers. Those vulnerabilities aren't being reported, and aren't getting fixed, making your computers and networks unsafe.

On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn't hacked; what probably happened was that a "staging server" for NSA cyberweapons -- that is, a server the NSA was making use of to mask its surveillance activities -- was hacked in 2013.

The NSA inadvertently resecured itself in what was coincidentally the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: "!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?"

Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee -- or other high-profile data breaches -- the Russians will expose NSA exploits in turn.

But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and "exploit code" that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, Watchguard, and Juniper -- systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA -- despite what it and other representatives of the US government say -- prioritizing its ability to conduct surveillance over our security. Here's one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls' security. Cisco hasn't sold these firewalls since 2009, but they're still in use today.
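To see why a memory-disclosure bug like that is so damaging, consider what happens once an attacker holds a raw dump of the firewall's memory: something as simple as a printable-string scan (essentially what the Unix strings utility does) can surface key material. The sketch below illustrates only that post-disclosure step with fabricated data; it is not the exploit, and the key format shown is made up.

```python
# Illustration of why leaked device memory is dangerous: credentials fall out
# of a simple printable-string scan. The "leaked" bytes and key are fabricated.
import re

def printable_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Return runs of printable ASCII at least min_len characters long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

# A fabricated "leaked" memory region containing a VPN pre-shared key.
leaked_memory = b"\x00\x17\xfbikev1-psk=hunter2-corp-vpn\x00\x9c\x01garbage\xff"
for s in printable_strings(leaked_memory):
    print(s)  # -> "ikev1-psk=hunter2-corp-vpn", then "garbage"
```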

Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard "zero days" -- the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is "a clear national security or law enforcement" use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn't stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

Hoarding zero-day vulnerabilities is a bad idea. It means that we're all less secure. When Edward Snowden exposed many of the NSA's surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It's an inter-agency process, and it's complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see that it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can't use it to attack other systems.

There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there's the bigger question of what qualifies in the NSA's eyes as a "vulnerability."

Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can't use, and doing so gets its numbers up; it's good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems -- whoever "they" are. Either everyone is more secure, or everyone is more vulnerable.

Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn't rely on zero days -- very much, anyway.

Earlier this year at a security conference, Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) organization -- basically the country's chief hacker -- gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: "A lot of people think that nation states are running their operations on zero days, but it's not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive."

The distinction he's referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for "nobody but us." Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It's an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.
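For illustration only, here is a toy sketch of the kind of weighing a NOBUS judgment and the Vulnerabilities Equities Process imply. The factors, weights, and threshold are invented; the real interagency process is a set of human judgment calls, not a formula, which is part of why it is prone to the hubris and optimism mentioned above.

```python
# Toy model of the weighing a NOBUS judgment and the VEP involve. Factors,
# weights, and threshold are invented for illustration, not official criteria.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    product_ubiquity: float        # 0..1, how widely deployed the affected product is
    rediscovery_likelihood: float  # 0..1, chance someone else finds it too
    offensive_value: float         # 0..1, usefulness for intelligence collection

def should_disclose(v: Vulnerability, disclose_bias: float = 0.2) -> bool:
    """Disclose when the defensive equity outweighs the offensive one.

    The disclose_bias term encodes a "strong bias toward fixing
    vulnerabilities" of the kind the essay argues for.
    """
    defensive_equity = v.product_ubiquity * v.rediscovery_likelihood
    return defensive_equity + disclose_bias > v.offensive_value

# A bug in a common firewall that others could plausibly rediscover:
widespread = Vulnerability(product_ubiquity=0.9, rediscovery_likelihood=0.7, offensive_value=0.6)
print(should_disclose(widespread))  # True: clearly not "nobody but us"

# A bug requiring capabilities few others have:
exotic = Vulnerability(product_ubiquity=0.3, rediscovery_likelihood=0.05, offensive_value=0.9)
print(should_disclose(exotic))      # False under this toy weighing
```

Even this toy version makes the point: a widely deployed product plus any realistic chance of rediscovery should push the decision toward disclosure.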

The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone -- another government, cybercriminals, amateur hackers -- could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that -- according to the standards established by the White House and the NSA -- should have been disclosed and fixed, it's these. That they have not been during the three-plus years that the NSA knew about and exploited them -- despite Joyce's insistence that they're not very important -- demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is the set of recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

And as long as I'm dreaming, we really need to separate our nation's intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency's mission should be limited to nation state espionage. Individual investigation should be part of the FBI's mission, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS's mission.

I doubt we're going to see any congressional investigations this year, but we're going to have to figure this out eventually. In my 2015 book Data and Goliath, I write that "no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find..." Our nation's cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

This essay previously appeared on Vox.com.

EDITED TO ADD (8/27): The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.

James Bamford thinks this is the work of an insider. I disagree, but he's right that the TAO catalog was not a Snowden document.

People are looking at the quality of the code. It's not that good.

from https://www.schneier.com/blog/