Monday, April 30, 2012

What the Hyperspeed "Bullet Time" guys might be thinking

Background: "Bullet Time" article in New Scientist

When this article hit Twitter, lots of us thought that it was hand-waving security, no more worthy of contemplation than the Evil Bit joke RFC.  However, like the Evil Bit, there might be some deeper truth to what they're proposing.

The journal article which contains the proposal focuses on creating low-bandwidth high-speed links within MPLS networks.  In that sense, it's an example of RFC 1925 truth 11, re-introducing an old idea to solve a new problem.  Prioritization of network control traffic was built into the TOS flags in IP, which were later replaced with DSCP.  It's not a bad idea, just one which has never found its "killer app", so has not had widespread adoption.  (Whether QoS has widespread adoption is a discussion left as an exercise for the reader.)

The basic concept of "Hyperspeed", the name given to the proposal, is that network-control-type traffic be pre-provisioned into existing infrastructure.  The New Scientist article focuses on a security use case in which cloud-based traffic examination (I'm putting words into their mouth here, they gracefully avoid using the c-word) detects potentially malevolent traffic and pre-warns the recipient network of a potential attack.  However, I suspect that the NS article suffers from the same media hype and distortion which surround a lot of technical discussion.

Taken on its own, there is a reasonable argument for this type of approach.  Adding extra layers of network defense can interfere with traffic.  Whether we security folks like it or not, speed trumps security when providing services.  Therefore, if it is economically feasible to inject high-speed low-latency analysis at a centralized point upstream of the target network, it would be useful to have a coordination protocol which automatically triggers deeper levels of analysis.

Anyone who has deployed a blocking IDS will immediately recognize that tuning the system is tricky.  Thresholds must be localized for the target network to avoid false positives blocking legitimate connections.  It might therefore be useful to have pre-set blocking tiers, and when the network terror level rises from Cookie Monster to Bert, either dynamically reconfigure the IDS or simply change the packet path to traverse additional layers like Riverhead Networks used to do.
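The pre-set blocking tiers described above could be sketched as a simple lookup table; the tier names, thresholds, and the `respond_to_alert` helper here are all invented for illustration, not taken from any real IDS product:

```python
# Hypothetical sketch: pre-set blocking tiers keyed to a "terror level",
# escalating from Cookie Monster (relaxed) up to Bert (strict).
TIERS = {
    "cookie-monster": {"block_threshold": 0.9, "extra_scrubbing": False},
    "ernie":          {"block_threshold": 0.7, "extra_scrubbing": False},
    "bert":           {"block_threshold": 0.5, "extra_scrubbing": True},
}

def respond_to_alert(level):
    """Pick the pre-tuned IDS settings for the current alert level.

    A real deployment would push these settings to the sensor, or flip
    the packet path through an additional scrubbing layer, rather than
    just returning a dict.
    """
    return TIERS[level]
```

The point of pre-setting the tiers is that the localized tuning work happens once, in advance, instead of in a panic during an attack.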

The problem with Hyperspeed, based on what little we know so far, is that it provides yet another tool that doesn't fully solve the problem.  It uses upstream detection to metaphorically set the Evil Bit.  Downstream consumers of this information would theoretically be in a position to apply their own local policies in response to a potential attack.  However, this pass-the-buck security model relies on those downstream targets having their own expertise and automated systems.  Anyone who might find value in an early warning is arguably already prepared, and anyone who has no defense plan will only hear "hey, maybe something bad might happen," which is always true when connected to the Internet.

In short, don't look for this proposal to go anywhere outside of research networks.  Pieces of it may well get implemented somewhere, but the underlying idea is nothing new.

Wednesday, April 25, 2012

"Internet Doomsday" modest proposal

Most readers of this blog likely know about the "DNS Changer" malware which, appropriately enough, changed the DNS settings of the systems it infected.  Now those DNS servers are being run by the FBI, who are planning to shut them down July 9, 2012 (after pushing the date back at least once).  The concern is that anyone still using those DNS servers will no longer be able to resolve hostnames, thus making the Internet "go away" for them.

It seems like there's a simple solution here, using a common (albeit unpopular) technique.  Many ISPs, when their DNS is queried for a non-existent address, will return a fake response which, through a hand-waving combination of DNS and HTTP, redirects the user's browser to a web page at that ISP.  Those pages typically say "That site doesn't exist, did you mean this other one, and BTW here are some ads."  The same technique is used very effectively by OpenDNS for Internet filtering.  (The good kind, not the evil kind.)

So... what if the FBI set up a PSA (Public Service Announcement) captive portal using this technology?  It's easy enough to set up: cache client IP addresses with a 1-hour sliding timer, i.e. each DNS query resets the clock.  If the client IP is in the list, send the query to the real DNS.  If the client IP isn't in the list, forward the query to the PSA DNS, which answers with a 0 TTL so the fake record isn't cached.  The browser will then load & display the PSA page.
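That decision logic is small enough to sketch in a few lines; everything here (the `choose_upstream` helper, the upstream labels, the in-memory dict standing in for a real cache) is invented for illustration:

```python
import time

WINDOW = 3600  # the 1-hour sliding window from the sketch above

last_seen = {}  # client IP -> timestamp of its most recent query

def choose_upstream(client_ip, now=None):
    """Return which DNS upstream should answer this client's query.

    A client's first query (or first query after an hour of silence)
    goes to the PSA server; every query, either way, resets that
    client's one-hour clock.
    """
    now = time.time() if now is None else now
    fresh = client_ip in last_seen and (now - last_seen[client_ip]) < WINDOW
    last_seen[client_ip] = now  # each DNS query resets the sliding timer
    return "real-dns" if fresh else "psa-dns"
```

Because the PSA response carries a 0 TTL, the client won't cache the fake answer, so once the user clicks through, subsequent lookups go straight to the real DNS.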

The PSA page should include a brief description of what happened - malware blah blah cleaned blah blah make this change before July 9 blah blah click here to see the official FBI page - as well as directions on resetting the DNS to its proper value, most likely the DHCP-provided DNS setting.  At an obvious location in the page, there's a link to the website the user was trying to visit.  (Easy to rebuild using the Host: HTTP header and the page request.)  Click the link, the client DNS resolves via the real DNS, and the user merrily goes on to the Internet.
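Rebuilding that "where you were going" link from the captured request might look like the following; `original_url` is a hypothetical helper, assuming a plain HTTP GET (HTTPS would never reach the portal cleanly anyway, since the certificate wouldn't match):

```python
def original_url(headers, request_line):
    """Reconstruct the URL the user was trying to visit, from the
    request line and Host: header captured at the PSA portal.
    """
    method, path, _version = request_line.split(" ", 2)
    host = headers.get("Host", "")
    return f"http://{host}{path}"
```

The query string rides along in the path, so the user lands back on exactly the page they asked for.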

Standard traffic stats, like the DNS and web server logs, should show whether it's working: each client IP should show up in the web server access log, and over time, the number of clients served should drastically decrease.

It seems like someone is missing something obvious: is it the FBI, or is it me?