Throughout history, people have taken mature innovations and tried to mold them into objects they were never designed to be. An example? The first cars were carriages with engines, the first powered ships were sailing ships fitted with paddle wheels, and so on.
That said, history has also shown the limits of stretching objects beyond their intended purpose – and these efforts often end in failure.
Much like the flying car, which never took off (no pun intended), repurposing existing technology often restricts innovation. Repurposing objects for new, unique purposes isn’t innovative – it’s resourceful. It is not until someone steps forward and solves an existing problem from scratch that the possibilities of real, ground-breaking thinking are revealed.
The problems that come with repurposing innovations can also be seen in the world of network technology. Society has a history of taking a product that is perfectly designed for one purpose and diminishing its value by trying to make it do something it cannot or should not do. Why would you take a connectionless protocol like IP and try to use it for a connection-oriented application like voice? Why would you use a watch to view TV? These are the questions that keep innovators up at night.
Repurposing a purpose-built network
In the “olden days,” data center networks were built using either Token Ring or Ethernet, with the choice of protocol driven by the application and its use. Computers connected to hubs with shared bandwidth, and each subnet in turn connected to a router – essentially forming a collapsed backbone.
As network speeds increased from 10Mbps to 100Mbps, it became clear that this model could not scale. Fortunately, the problem surfaced just as the Ethernet switch was arriving, allowing some networks to scale more seamlessly by inserting switches at key points.
In time, it became obvious that router technology would not be able to keep up, creating the need for a new way to route across switched networks. The solution came in the form of an SDN-like approach: packets destined for another subnet were tagged with a destination label, allowing them to be delivered without touching the router.
After several years, almost all vendors had their own, unique approach to this – none of which were particularly elegant or easy to implement. As Ethernet switching became more prevalent, it became obvious that the complexity of these technologies would not scale.
After some time, a more obvious solution arrived – routing IP in silicon. After that, several notable things happened. Almost overnight, all of the SDN-style solutions disappeared, and IP over Ethernet became the standard for all communication between computers. This standardization allowed for massive innovation, which eventually paved the way for Wi-Fi, the World Wide Web, and perhaps most importantly, cat videos.
Defending against innovative attackers requires hyper-innovative thinking
Fast forward to the 2020s, and organizations today find themselves in a similar position. There is a fundamental cybersecurity problem running amok in today’s IT, data center, and cloud environments: organizations are unable to stop the lateral movement of malware. In layman’s terms, this means that the actors behind data breaches, ransomware, and cyberattacks are getting first-class access to organizations’ “crown jewels” simply by bypassing a firewall or infiltrating a supply chain. The solution is to segment resources in the data center and cloud.
At the moment, companies are attempting to self-segment by leveraging network solutions to combat the threat at the network level. Yet, once again, organizations are turning to existing technologies to achieve network and cloud segmentation, resulting in implementations that are complex and hard to manage. This is the flying car approach – the “try to make something work because it probably can” approach. But when it comes to cybersecurity, a flying car approach won’t cut it. Repurposed technology can apply some coarse segmentation, but as soon as organizations try to do anything more granular, they run into problems.
Ideally, companies need to be able to define whitelist rules for each process within an application and control how connections are made, independent of location or computing environment. At the network layer this is nearly impossible, so it must be done within the workload itself. The primary challenge is achieving this with as little complexity as possible. Like the switch to IP, the solution should be elegant and simple.
Organizations should use the intelligence in the workload’s operating system to enforce policies, then add intelligence that automates rule creation and provides complete visibility into all communication. Finally, this process should be rounded out with the ability to meet regulatory requirements by applying one-click encryption.
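To make the idea concrete, here is a minimal sketch of what a per-process whitelist evaluated inside the workload might look like. This is an illustration only, not any vendor’s actual implementation; the process names, hosts, and ports are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    """One whitelist entry: a process allowed to reach a destination."""
    process: str  # process attempting the outbound connection
    host: str     # allowed destination host
    port: int     # allowed destination port


# Hypothetical policy: each application process gets its own whitelist.
POLICY = {
    Rule("billing-api", "postgres.internal", 5432),
    Rule("billing-api", "payments.internal", 8443),
    Rule("web-frontend", "billing-api.internal", 8080),
}


def connection_allowed(process: str, host: str, port: int) -> bool:
    """Default-deny: permit a connection only if an explicit rule exists."""
    return Rule(process, host, port) in POLICY
```

With a default-deny policy like this, `connection_allowed("billing-api", "postgres.internal", 5432)` returns `True`, while a compromised `web-frontend` process trying to reach the database directly is refused – which is exactly the lateral movement the article argues the network layer struggles to stop.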
Much like in the mid-90s, we are once again at a technology tipping point where innovation can make an impact beyond its own world. By simplifying segmentation across the hybrid cloud, organizations can more easily remove one of the inhibitors to cloud adoption – opening the door to a plethora of new innovations and solutions. Who knows – by relying on organic innovation alone, we may just witness the introduction of the first flying car (though we may need to wait another 50 years for that technology to catch up). Let’s keep our fingers crossed – I’ll be catching up on my cat videos in the meantime.