Few government agencies have taken a reputation hit in recent times as big as the one currently battering the National Security Agency (NSA). And while many in the tech industry were distrustful of the NSA before, there was at least admiration for its prowess in cyberwarfare.
I have to say the agency has dropped a notch in my estimation of that prowess, if a recent New York Times article is to be believed. The focus of the article is how the political trouble in which the NSA finds itself has doomed a proposal championed by Gen. Keith B. Alexander, the director of the NSA and head of the Pentagon’s Cyber Command. Details about the plan are only implied, and the article makes no attempt to evaluate its merits.
What is the plan? From the Times:
Under this proposal, the government would latch into the giant “data pipes” that feed the largest Internet service providers in the United States, companies like A.T.&T. and Verizon. The huge volume of traffic that runs through those pipes, particularly e-mails, would be scanned for signs of anything from computer servers known for attacks on the United States or for stealing information from American companies. Other “metadata” would be inspected for evidence of malicious software.
It sounds like a firewall/IDS/DLP for the whole United States of America. The notion is ridiculous. Many of the Internet pipes they speak of monitoring are OC-192s, each with a capacity of nearly 10 Gbps. Not even the NSA can scan all of that traffic for potentially malicious content and stolen intellectual property while also keeping its threat intelligence up to date.
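To put the scale in perspective, here is a rough back-of-the-envelope calculation (my own illustration, not from the article) of how much data a single saturated OC-192 link carries in a day:

```python
# Rough sketch: daily volume of one fully loaded OC-192 link.
# Figures are illustrative; real links are rarely at 100% utilization.
OC192_GBPS = 9.953            # OC-192 line rate, roughly 10 Gbps
SECONDS_PER_DAY = 86_400

bits_per_day = OC192_GBPS * 1e9 * SECONDS_PER_DAY
terabytes_per_day = bits_per_day / 8 / 1e12

print(f"One OC-192 at full load: ~{terabytes_per_day:,.0f} TB/day")
# -> roughly 107 TB per day, per link, and a national tap would span many such links.
```

And that is before you try to do anything smarter with the traffic than counting bytes.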
Of course, any large organization or ISP attempts to do the same thing, on a much more modest and practical scale, using one (or more) of many commercial products. Perhaps they don’t have all the same intelligence about attacks and the addresses of malicious actors as the NSA (although they might), but even if the NSA’s intelligence is better, it doesn’t follow that the agency could do anything constructive with an operation of this scale.
We know that the NSA is inspecting some of this traffic already, but only a small percentage of it and for a much narrower problem scope. Effectively detecting “evidence of malicious software” requires reassembling traffic into context (flows, sessions, files), not just pattern-matching individual packets, and that multiplies the required computing resources.
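To make that concrete, here is a minimal sketch (my own illustration, using a made-up packet model and signature) of why stateless pattern matching misses what stream reassembly catches:

```python
# A signature split across two packets is invisible to a per-packet check,
# but visible once the flow is reassembled into context.
SIGNATURE = b"EVIL_PAYLOAD"

packets = [
    {"flow": ("10.0.0.1", "93.184.216.34", 443), "seq": 0,  "data": b"...EVIL_PAY"},
    {"flow": ("10.0.0.1", "93.184.216.34", 443), "seq": 11, "data": b"LOAD..."},
]

def per_packet_match(pkts):
    """Stateless check: look at each packet in isolation."""
    return any(SIGNATURE in p["data"] for p in pkts)

def stream_match(pkts):
    """Stateful check: reassemble each flow in sequence order, then scan."""
    flows = {}
    for p in sorted(pkts, key=lambda p: p["seq"]):
        flows.setdefault(p["flow"], bytearray()).extend(p["data"])
    return any(SIGNATURE in bytes(buf) for buf in flows.values())

print(per_packet_match(packets))  # False: the pattern straddles a packet boundary
print(stream_match(packets))      # True: reassembly recovers the context
```

Real IDS engines do far more than this (protocol decoding, file extraction, behavioral scoring), which is exactly why the resource cost grows so quickly.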
And then there’s the fact that they wouldn’t be in a position to read any encrypted traffic, and a large percentage of the e-mail traffic they are already sniffing is encrypted. Perhaps the NSA has the computing resources, given some time, to decrypt some of the traffic it targets for scrutiny, but such resources are best saved for high-value intelligence rather than ordinary intrusion detection.
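Encryption closes the door on content scanning entirely. As a trivial illustration (again my own, assuming the third-party cryptography package and a made-up signature), the same kind of signature scan finds nothing in an encrypted payload:

```python
# Illustration only: content scanning sees nothing useful once the payload
# is encrypted end to end. Requires: pip install cryptography
from cryptography.fernet import Fernet

SIGNATURE = b"EVIL_PAYLOAD"
key = Fernet.generate_key()                       # known only to the endpoints
ciphertext = Fernet(key).encrypt(b"...EVIL_PAYLOAD...")

print(SIGNATURE in ciphertext)                    # False: a passive tap sees only ciphertext
```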
The Times article says that the program actually goes a lot further than detection:
The worst malware could be blocked before it reaches companies, universities or individual users, many of whom may be using outdated virus protections, or none at all. Normal commercial virus programs are always running days, or weeks, behind the latest attacks — and the protection depends on users’ loading the latest versions on their computers.
Oh, really? They’re going to block malware at the border? This is as bad as the way computers are portrayed on “Law & Order” or “NCIS”: inevitably the systems can perform magical tasks with minimal effort (and inevitably the operator of the computer is a beautiful woman wearing glasses, but I digress…).
Later on, the article makes some more reasonable observations about the value that defense contractors in an NSA intelligence-sharing pilot saw in the program. One of the standard proposals made to justify greater government involvement in “cybersecurity” is to increase the sharing of security intelligence… as if this isn’t already happening on a large scale.
It does happen. Everyone already talks to everyone else and shares information about malicious sites and malware through a variety of channels, some open, some private. Anyone who can fly under the radar of today’s best systems can probably get past the NSA’s border firewall. As I suggested above, all an attacker needs to do is encrypt the traffic and route it through an address with a clean reputation. For a high-value attack, that is a small price to pay.
Like so many anonymously sourced articles, this one stinks of thinly hidden agendas. Someone is trying to protect a program budget. Another possibility, even less comforting than budget politics, is that it’s a ruse to gain increased access to Internet traffic for other intelligence purposes. Those may well be legitimate intelligence purposes, but I wish they wouldn’t insult us by claiming it’s all part of a magic Internet defense shield.
Security for the Internet is probably best left to the private companies that operate it, that have every incentive to protect it, and that can’t get away (for long) with claims they can’t deliver on.
Source: http://www.zdnet.com/the-nsas-phony-national-firewall-proposal-7000019651/