Wednesday, November 28, 2012

Blogging: inconsistent quality or frequency?

One of my tasks at my company is to create blog entries, on both internal and external blogs. (See some of my other posts for links to what I've written.) I've noticed an interesting difference between writing for myself and writing for my company:

When I write for myself, my output is determined by when I get an idea that I want to share. Results are created by inspiration.

When I write for my company, my output is determined by our schedule. If I don't have an idea, I get one from our marcom organization, and I work at it until it makes sense, even if I personally don't see a lot of value in the content. Conversely, if I have an idea which is burning inside my head, I have to fight to get it included in our publishing schedule, with the occasional exception as a special post.

There are good reasons for scheduled output: when trying to generate a follower base, predictable frequency makes it easier for readers to know when something new will go up. It also means that, for infrequent readers, there will almost always be something new to read.

However, it's hard to train my brain to get inspired on demand. A lot of us know this feeling, and there's a fabulous write-up about it from The Oatmeal. In short, content on a schedule leads to inconsistency of output quality.

It's tempting to put together a bogus formula asserting that quality ideas arrive at a non-linear rate, which can lead either to inconsistent output frequency or to inconsistent output quality. I'll leave that as an exercise for the reader.

However, I'll also point out that there's a disconnect in perceived quality between the information producer and the information consumer. Neil Gaiman expressed this eloquently in his Make Good Art speech. It also appears to be a common experience: I write something that I think is awful, and other people love it, and vice-versa.

Given this disconnect, I feel comfortable for now working on a content production schedule. My internal discomfort at writing can spur me to produce output that's much better than I'd anticipated. It's also useful to get ideas for technical blog posts from people who have a different understanding of technology than I do, because it triggers the instinct to correct the misinformation - which leads to output on a schedule.

That being said, I still don't like the process, but I'm okay with the results.

P.S. Some readers will notice that the title of this post is a question. When I started writing, I had a completely different title, but I changed it part-way through, with the implication that it's an either-or. Now I look at it and realize that I'm compliant with Betteridge's Law, and the answer to the question is in fact "No".

Monday, August 13, 2012

IPv4 "Offshore Account" Predictions

IP addresses are a necessary resource for using the Internet, especially for service providers like web hosting companies.  Given that unused addresses are becoming scarce, I predict that we'll start seeing businesses invest in Latin America and in Africa specifically to acquire IP addresses there.


The setup

On July 31 this year, ARIN adopted a new policy to allow inter-region transfers of IP address allocations.  It may be news to some people that IPv4 addresses aren't like normal property that can be bought and sold at will.  According to ARIN CEO John Curran, this is because "how we use [...] IP addresses affects all networks". Interestingly, as of August 8, the only other regional internet registry (RIR) with a compatible policy is APNIC, which reached address exhaustion on April 19, 2011.

A quick note on address exhaustion: this means they have less than 1 /8 block left, not that they are completely out of addresses.  A /8 potentially contains 65536 /24 blocks - although that number will be smaller if an organization can convince the RIR to allocate a larger block than a /24.  APNIC currently has 0.9183 /8 blocks, which roughly translates to about 60000 /24 blocks.
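For anyone who wants to sanity-check those numbers, the arithmetic is trivial; here's a quick back-of-the-envelope sketch in Python:

```python
# Back-of-the-envelope check of the /8 vs /24 numbers above.
# A /8 spans 2**24 addresses and a /24 spans 2**8, so a /8 holds 2**16 /24s.
slash24s_per_slash8 = 2 ** (24 - 8)                # 65536

apnic_remaining_slash8s = 0.9183                   # APNIC's reported remaining pool
apnic_remaining_slash24s = apnic_remaining_slash8s * slash24s_per_slash8

print(slash24s_per_slash8)                         # 65536
print(round(apnic_remaining_slash24s))             # ~60182, i.e. "about 60000"
```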

Why is it important that APNIC and ARIN have compatible regional transfer policies?  Because that compatibility is what makes it possible to move IP address allocations between the two regions.  Right now, the obvious motive is to ease the IP crunch in APNIC using addresses from ARIN, which still has 3.4561 /8 blocks.  However, on the FAQ page for this new policy, ARIN states that "There are inter-RIR transfer policy proposals in several other regions at the moment".  Assuming that the other RIRs have a similar mentality to ARIN's, it's likely that ARIN is establishing a policy now to allow for inbound IP transfers once ARIN itself reaches exhaustion in early 2013.


The sources

Current projections for remaining IP blocks are available in this nifty gadget courtesy of INTEC, although its numbers differ from the data provided by Internet guru Geoff Huston.


The big question is what ARIN will require of another region's policy before considering it "compatible".  ARIN's transfer policy imposes a 12-month before-and-after waiting period on transfer sources within the ARIN region: the source must have had the IP addresses for over 12 months before the transfer, and can't receive any more addresses from ARIN for another 12 months after the transfer.  However, the policy also states that "Source entities outside of the ARIN region must meet any requirements defined by the RIR where the source entity holds the registration."


The possibilities

1. Direct IP exporters

If another region has a much less restrictive policy, there's the possibility of a new business model for a company in that region to apply for IP address blocks, then sell them.  It's a centuries-old practice for a developing nation to sell its raw resources to overseas buyers.

2. Foreign shell companies

If other regions are planning to adopt strict restrictions like ARIN's on source organizations for IP transfers, the logical step is for global organizations to found shell companies right now in the LACNIC and AfriNIC regions.  Then, once the hold-down timers expire - i.e. the shell company has held its IP addresses for 12 months, or whatever the local policy requires - the parent company would either initiate a transfer or simply acquire the shell under ARIN's mergers and acquisitions policy.
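To make the timing concrete, here's a minimal sketch (hypothetical dates, and assuming a 12-month hold-down like ARIN's) of when a shell company's block would become transferable:

```python
from datetime import date, timedelta

# Hypothetical illustration of the hold-down timer: if a shell company in the
# LACNIC or AfriNIC region receives a block today, when can the parent move it?
HOLD_DOWN = timedelta(days=365)        # "12 months or whatever the local policy requires"

def transferable_on(allocated, hold_down=HOLD_DOWN):
    """Earliest date the block clears the hold-down and can be transferred."""
    return allocated + hold_down

allocation_date = date(2012, 8, 13)      # hypothetical: block received today
print(transferable_on(allocation_date))  # 2013-08-13, conveniently after ARIN's projected exhaustion
```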

3. Foreign providers

If an address transfer isn't feasible, the next likely business model would be for "boutique" hosting companies to spring up in the LACNIC and AfriNIC areas.  Bandwidth prices in those regions are high but falling, creating the potential for local hosting companies to rent out destination IP addresses, or, more importantly, blocks of IP addresses.

To reduce bandwidth usage (and therefore costs), there are techniques available both at the IP layer and at higher layers.

At the IP layer, there's no current technical restriction preventing geographic relocation of a block via BGP advertisement, in effect a semi-legitimate use of IP hijacking.  Even anti-hijacking technologies like RPKI could be co-opted, either by being disabled (the current status of the anti-spam SPF technology) or by simple delegation within the PKI.

Within higher layer protocols like HTTP, techniques like redirects are a time-tested method of providing a fixed landing point with dynamically located content.  The hosted site would contain just enough information to pull the real content from a CDN or other external high-bandwidth low-cost source.
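As a sketch of that last point (hypothetical hostnames throughout, and obviously not how any particular provider does it), the landing server doesn't need to be much more than this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "boutique host" landing server: it occupies the rented IP address
# but serves only redirects; the real content lives on a CDN or other cheap,
# high-bandwidth source.
CDN_BASE = "https://cdn.example.net"   # placeholder, not a real deployment

class RedirectingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Preserve the requested path so the CDN can serve the intended content.
        self.send_response(302)
        self.send_header("Location", CDN_BASE + self.path)
        self.end_headers()

if __name__ == "__main__":
    # Bandwidth use on the scarce address is limited to tiny redirect responses.
    HTTPServer(("0.0.0.0", 8080), RedirectingHandler).serve_forever()
```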


The timeframe

Given the 12-month limitation from ARIN for transfers, plus the projection of ARIN address exhaustion only 6-12 months away, look for large organizations to start this kind of "IP address offshoring" very soon.  In fact, given that the newly adopted ARIN policy was first proposed in February 2011, it's likely that some global organizations have already started this process.

Just for grins, check out this map of Chinese investment in Africa - then remember that China is in APNIC, and has long been short on IP addresses.

Monday, June 4, 2012

Flame - something only a government could build?

The NY Times (June 3 2012) published an article saying that Kaspersky Labs has declared Flame to be something only a government could build.  I disagree.

Look at the purported history of Stuxnet.  The most comprehensive story so far comes from David Sanger - again in the NY Times - saying that it was a project started under Bush-43, continued under Obama-44, co-developed with Israel's Unit 8200.  The larger story implies that the development was broken down into modules coded separately, with the programmers potentially unaware of the nature of the project.  This sounds a lot like the development process of the virus in Neal Stephenson's 1992 novel Snow Crash, in which (ironically) the US Government is the contractor creating the virus for the villain.  The Government was chosen as the development partner because they are the only organization paranoid enough not to trust their programmers with any "big picture" view.

Flame is a bloated beast, 20MB in size, with business logic in Lua calling compiled C++ modules.  While there's nothing unusual about that structure - even large web sites use hybrid combinations of languages like Scala and Java - it does add credence to the idea that this code was not written by one person, nor perhaps even by one organization.  Flame could easily have been developed according to the same paranoid, Snow Crash-style separation-of-information principles.

We've heard statements before that some capabilities are only within reach of government-level operations.  My favorite example was when members of the l0pht testified before Congress about being able to take down the Internet in 30 minutes.  The government's reaction was astonishment - they'd assumed that no private or commercial organization had such capabilities, and had to go back and rewrite their threat models.

In short: while Flame is a sophisticated toolkit for malware, it's not something that only a government could build.  However, the way that it's put together is the way that a government would build it.

In researching this post, I finally found some information that hadn't been exaggerated by bouncing around the media echo chamber: a BBC News article that actually quotes someone at Kaspersky Labs. Vitaly Kamluk, "chief malware expert" per the BBC, is quoted thusly:
"Currently there are three known classes of players who develop malware and spyware: hacktivists, cybercriminals and nation states. Flame is not designed to steal money from bank accounts. It is also different from rather simple hack tools and malware used by the hacktivists. So by excluding cybercriminals and hacktivists, we come to conclusion that it most likely belongs to the third group."
My reading of this statement is that Flame's behavior, based on its actions, is inconsistent with the known non-government classes of malware, which by elimination makes a nation state the most likely author.  However, that headline won't sell news stories.

Monday, April 30, 2012

What the Hyperspeed "Bullet Time" guys might be thinking

Background: "Bullet Time" article in New Scientist

When this article hit Twitter, lots of us thought it was hand-waving security, no more worthy of contemplation than the Evil Bit joke RFC.  However, like the Evil Bit, there might be some deeper truth to what they're proposing.

The journal article which contains the proposal focuses on creating low-bandwidth, high-speed links within MPLS networks.  In that sense, it's an example of RFC 1925 truth 11: re-introducing an old idea to solve a new problem.  Prioritization of network control traffic was built into IP's TOS field, which was later redefined as DSCP.  It's not a bad idea, just one which has never found its "killer app", and so has not seen widespread adoption.  (Whether QoS has widespread adoption is a discussion left as an exercise for the reader.)
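For what it's worth, that old mechanism is still exposed to applications; here's a minimal sketch in Python of marking a socket's traffic (Linux-style socket option, DSCP EF picked arbitrarily - whether any network along the path honors it is another matter):

```python
import socket

# Minimal sketch: mark outgoing packets with DSCP EF (46).
# The DSCP value sits in the upper six bits of the old TOS byte, hence the shift.
# IP_TOS is platform-dependent (this works on Linux), and the marking only
# matters if the operators along the path have configured QoS to honor it.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"control-plane-ish traffic", ("192.0.2.1", 9999))
```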

The basic concept of "Hyperspeed", the name given to the proposal, is that network-control-style traffic gets pre-provisioned fast paths within existing infrastructure.  The New Scientist article focuses on a security use case in which cloud-based traffic examination (I'm putting words into their mouth here; they gracefully avoid using the c-word) detects potentially malevolent traffic and pre-warns the recipient network of a potential attack.  However, I suspect that the NS article suffers from the same media hype and distortion that surrounds a lot of technical discussion.

Taken on its own, there is a reasonable argument for this type of approach.  Adding additional layers of network defense can interfere with traffic.  Whether we security folks like it or not, speed trumps security when providing services.  Therefore, if it is economically feasible to inject high-speed low-latency analysis at a centralized point upstream of the target network, it would be useful to have a coordination protocol which automatically triggers deeper levels of analysis.

Anyone who has deployed a blocking IDS will immediately recognize that tuning the system is tricky.  Thresholds must be localized for the target network to avoid false positives blocking legitimate connections.  It might therefore be useful to have pre-set blocking tiers, and when the network terror level rises from Cookie Monster to Bert, either dynamically reconfigure the IDS or simply change the packet path to traverse additional layers like Riverhead Networks used to do.
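As a toy sketch of those pre-set tiers (names and actions are entirely invented for illustration), the coordination side is little more than a lookup table - the hard part remains the local tuning:

```python
# Toy sketch: map an upstream "network terror level" to a local inspection policy.
# The levels and actions are invented for illustration; real thresholds would have
# to be tuned to the target network, which is exactly the tricky part.
POLICIES = {
    "cookie_monster": {"inline_block": False, "extra_inspection": False},
    "bert":           {"inline_block": False, "extra_inspection": True},
    "oscar":          {"inline_block": True,  "extra_inspection": True},
}

def apply_policy(level):
    """Pick the blocking tier for the advertised threat level."""
    policy = POLICIES.get(level, POLICIES["cookie_monster"])
    # In a real deployment this is where the IDS would be reconfigured, or the
    # packet path changed to traverse an additional scrubbing layer.
    return policy

print(apply_policy("bert"))   # escalate inspection, but don't start blocking yet
```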

The problem with Hyperspeed, based on what little we know so far, is that it provides yet another tool that doesn't fully solve the problem.  It uses upstream detection to metaphorically set the Evil Bit.  Downstream consumers of this information would theoretically be in a position to apply their own local policies in response to a potential attack.  However, this pass-the-buck security model relies on those downstream targets having their own expertise and automated systems.  Anyone who might find value in an early warning is arguably already prepared, and anyone who has no defense plan will only hear "hey, maybe something bad might happen," which is always true when connected to the Internet.

In short, don't look for this proposal to go anywhere outside of research networks.  Pieces of it will likely be implemented soon, but the underlying idea is nothing new.

Wednesday, April 25, 2012

"Internet Doomsday" modest proposal

Most readers of this blog likely know about the "DNS Changer" malware which, appropriately enough, changed the DNS settings of the systems it infected to point at rogue DNS servers.  Those servers are now being run by the FBI, which plans to shut them down on July 9, 2012 (after pushing the date back at least once).  The concern is that anyone still using those DNS servers will no longer be able to resolve names to IP addresses, thus making the Internet "go away" for them.

It seems like there's a simple solution here, using a common (albeit unpopular) technique.  Many ISPs, when their DNS is queried for a non-existent address, will return a fake response which, through a hand-waving combination of DNS and HTTP, redirects the user's browser to a web page at that ISP.  Those pages typically say "That site doesn't exist, did you mean this other one, and BTW here are some ads."  This same technique is also used very effectively by OpenDNS for internet filtering.  (The good kind, not the evil kind.)

So... what if the FBI set up a PSA (Public Service Announcement) captive portal using this technique?  It's easy enough to set up: cache client IP addresses with a 1-hour sliding timer, i.e. each DNS query resets the clock.  If the client IP is already in the cache, send the query to the real DNS.  If the client IP isn't in the cache, add it and forward the query to the PSA DNS, which answers with the PSA server's address and a 0 TTL.  The browser will then load & display the PSA page.
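A minimal sketch of that decision logic (the actual DNS forwarding plumbing is hand-waved away; only the sliding-window cache is shown):

```python
import time

# Sketch of the PSA captive-portal decision described above.  Clients seen
# within the last hour resolve normally; anyone else gets steered to the PSA
# server first, with a 0 TTL so the answer doesn't stick in their cache.
WINDOW = 3600                # one-hour sliding timer
recently_seen = {}           # client IP -> timestamp of last query

def handle_query(client_ip, now=None):
    now = now if now is not None else time.time()
    seen_recently = recently_seen.get(client_ip, 0) > now - WINDOW
    recently_seen[client_ip] = now           # every query resets the clock
    if seen_recently:
        return "forward-to-real-dns"         # normal resolution
    return "answer-with-psa-address-ttl-0"   # first contact: show the PSA page

# First query hits the PSA; the follow-up click resolves normally.
print(handle_query("198.51.100.7"))   # answer-with-psa-address-ttl-0
print(handle_query("198.51.100.7"))   # forward-to-real-dns
```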

The PSA page should include a brief description of what happened - malware blah blah cleaned blah blah make this change before July 9 blah blah click here to see the official FBI page - as well as directions on resetting the DNS to its proper value, most likely the DHCP DNS setting.  At an obvious location in the page, there's a link to the website the user was trying to visit.  (Easy to rebuild using the Host: HTTP header and the requested path.)  Click the link, the client's DNS query resolves via the real DNS, and the user merrily goes onto the Internet.

Standard traffic stat tools, like the DNS server log, should show whether it's working: each client IP should show up in the web server access log, and over time, the number of clients served should drastically decrease.

It seems like someone is missing something obvious: is it the FBI, or is it me?