- Virtualization is leading to larger numbers of individually addressable nodes. Each server is capable of hosting multiple guest images, so the number of manageable addresses is no longer limited by the number of physical devices, but only by the speed at which servers can be deployed. Given the strong market push in this direction, deployment is receiving increasing amounts of automation, allowing almost arbitrary MACs (moves, adds, changes) – sometimes completely automated.
- Virtualization is leading to increased specialization. Guest images are templatized to deploy services quickly: instead of building a server by installing an OS, patches, applications, and localized configuration, do it once, then clone it as needed. The overhead of a separate OS (and >= 1 IP address) per guest is outweighed by the reduction in administrative and management cost.
- Increased demand for performance has led to horizontal scaling and clusters, whereby additional speed, capacity, etc. can be added just by adding more servers. Web app too slow? Just spin up another compute node! Conversely, increased demand for cost savings by reduction of power (and cooling) during periods of decreased demand is leading to dynamic power management, turning off computers when not in use. Coupled with virtualization’s ability to add specialized servers dynamically, it is inevitable that fully automated deployment and retirement of servers will become commonplace.
Historically, we would have handled this with "meaningful" IP addresses: pre-allocate a range of addresses per application, so anything with an address in that range is "known" to have that purpose. However, a massive proliferation of dynamic changes makes pre-allocation difficult: any range by definition has a fixed number of addresses, and it becomes harder to predict how many will be necessary - especially if they're being added and retired automatically.
Filesystem block allocation is a useful analogy for IP address allocation. Windows users are long accustomed to occasionally de-fragmenting the hard drive for speed, since the blocks of large files don't always get stored consecutively. Many newer filesystems (especially on Linux) avoid the defrag problem by embracing fragmentation: rather than saving a file at the first available location, they save it farther out on the disk to leave room for file growth. Returning to IP, current practice has us looking for contiguous blocks of addresses for a given set of related systems, but address fragmentation will occur despite our best efforts. If address management becomes fully automated, with abstracted tools to provide access to each IP node, then the allocation headache goes away. Network management will (eventually) embrace fragmentation too.
For large address pools - desktops, laptops, users, etc. - DHCP has been the way to go, and I believe it will become the de-facto solution for server-type addresses as well. Spin up another front-end server in the cluster, give it a DHCP address, and programmatically add that address to the load balancer via its API; for the back end, many cluster solutions already support auto-add. From a network perspective this will work reliably and quickly - and that's the #1 criterion. From a security perspective, however, there's very little chance of being able to make a firewall change with the same automation. Even if it's technically supported, making security changes is "scary", so it won't fly politically.
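As a rough illustration of the "spin up and register" flow, here's a minimal sketch in Python. It assumes a load balancer that exposes some REST API for pool membership; the endpoint, pool name, and token are hypothetical placeholders, not any particular vendor's interface.

```python
# Minimal sketch: register a freshly booted, DHCP-addressed web node with a
# load balancer over a hypothetical REST API. Endpoint, pool name, and token
# are placeholders, not any specific vendor's interface.
import json
import socket
import urllib.request

LB_API = "https://lb.example.internal/api/pools/web-frontend/members"  # hypothetical
API_TOKEN = "REPLACE_ME"  # would come from a secrets store in practice

def local_ip() -> str:
    """Discover the address DHCP handed this host (no packets are actually sent)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 80))  # TEST-NET address; only used to pick an interface
        return s.getsockname()[0]
    finally:
        s.close()

def register_with_load_balancer(ip: str, port: int = 80) -> None:
    """Add this node to the front-end pool so it starts taking traffic."""
    body = json.dumps({"address": ip, "port": port}).encode()
    req = urllib.request.Request(
        LB_API,
        data=body,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("load balancer replied:", resp.status)

if __name__ == "__main__":
    register_with_load_balancer(local_ip())
```

Note that nothing in the sketch touches the firewall - which is exactly the gap described above.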
Firewall policies are already too big to manage. If contiguity of addresses goes away, a centrally managed policy becomes an order of magnitude larger.
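To make the contiguity point concrete, here's a small illustration (the addresses are hypothetical): a contiguous allocation summarizes into a single CIDR entry, while the same number of scattered addresses each need their own rule.

```python
# Illustration of why address fragmentation inflates firewall policies.
# A contiguous allocation collapses into one CIDR entry; scattered addresses
# don't summarize, so each becomes its own rule.
import ipaddress

# 64 web servers allocated contiguously out of 10.1.4.0/24 (hypothetical)
contiguous = [ipaddress.ip_address("10.1.4.0") + i for i in range(64)]

# 64 web servers handed out dynamically across all of 10.1.0.0/16 (hypothetical)
scattered = [ipaddress.ip_address("10.1.0.0") + (i * 997) % 65536 for i in range(64)]

def rule_count(addresses):
    """Number of summarized CIDR blocks needed to cover the addresses."""
    nets = [ipaddress.ip_network(f"{a}/32") for a in addresses]
    return len(list(ipaddress.collapse_addresses(nets)))

print("contiguous allocation:", rule_count(contiguous), "rule(s)")  # -> 1
print("scattered allocation: ", rule_count(scattered), "rule(s)")   # -> 64
```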
So... what's the solution? I see a couple of things as possible:
- Abstraction of policies. Remove references to IP addresses from the firewall rules, and let the system figure out what the actual elements are (a rough sketch of what this could look like appears after this list). This method tiptoes past the unease about automated firewall policy changes by hiding the dynamic elements behind a reassuringly static view of the policy. Is that a believable enough reassurance?
- Dispersed policies. VMware has a few tools which apparently allow for firewalling of each VM guest individually (VMsafe API and vShield 2.0). The implication is that a centralized firewall is less necessary, as each guest has its own firewall. My concern is that managing 1 firewall per VM may not be any easier than managing a central policy.
- Deployment solutions like HyTrust, which limit VM guest deployment to particular VM hosts. IMHO this addresses a related but different problem; it essentially creates multiple DMZ networks, minus the network security aspect.
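To give a flavor of the first option, here's a rough sketch of a group-based policy. Everything here is hypothetical: the rules name logical groups, and a resolver expands them against whatever addresses currently belong to each group. The policy an administrator reviews stays static even as membership churns underneath.

```python
# Rough sketch of an abstracted firewall policy: rules name logical groups,
# not addresses, and a resolver expands them against a (dynamic) inventory.
# The inventory is a static dict purely for illustration; in practice it would
# come from DHCP leases, the hypervisor, or a CMDB.
from dataclasses import dataclass

INVENTORY = {
    "web-frontend": ["10.1.4.17", "10.1.9.203", "10.1.131.8"],  # hypothetical
    "app-cluster":  ["10.1.22.4", "10.1.77.91"],
    "db-primary":   ["10.1.200.10"],
}

@dataclass
class Rule:
    src_group: str
    dst_group: str
    port: int

# The policy a human reviews: static, readable, no IP addresses in sight.
POLICY = [
    Rule("web-frontend", "app-cluster", 8080),
    Rule("app-cluster", "db-primary", 5432),
]

def expand(policy, inventory):
    """Expand group-based rules into the address-level rules a firewall enforces."""
    for rule in policy:
        for src in inventory[rule.src_group]:
            for dst in inventory[rule.dst_group]:
                yield (src, dst, rule.port)

for src, dst, port in expand(POLICY, INVENTORY):
    print(f"permit tcp {src} -> {dst}:{port}")
```

The open question from above still applies: the abstraction only reassures if people trust the resolver as much as they trust a hand-edited rule base.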