Is network monitoring dead?

Network monitoring is dead, says the CEO of cPacket Networks; that is, unless network monitoring solutions become agile enough to deliver real-time visibility, while keeping up with the increasing complexity, volume, and speed of network traffic in virtualized data center and cloud application delivery environments.

Today’s legacy monitoring solutions cannot address the demands of modern networks because their architecture was designed decades ago and was never conceived for the scale of today’s data centers and high-speed networks. In simple terms, the legacy architecture relies on aggregating all the traffic and analyzing it after the fact in a centralized location. The increasing volume and speed of network traffic make this legacy approach a “bottleneck by design.”

Such a bottleneck means lower operational agility, restricted visibility, and slow response to situations that require corrective action, because operations teams lack proactive situational awareness and real-time access to accurate data.

Explained Rony Kay, founder and CEO of cPacket: “The traditional monitoring architecture is like the cashier in a struggling legacy retailer. At legacy retail stores, during busy times the centralized cashier is a bottleneck that leaves customers standing around waiting. In an Apple Store, by contrast, the approach is more agile and distributed: every employee can help you with your needs and take your payment from anywhere on the floor, instantly.” This distributed model delivers higher customer satisfaction and makes more efficient use of space and time.

Traditional monitoring solutions aim to aggregate all the traffic from across the network for centralized post-processing, during which potential issues can be unearthed. In contrast, cPacket’s distributed approach performs the heavy lifting of inspecting every packet and every flow on the fly, in real time, while sending only relevant events upstream for reporting. Kay calls this distributed model “Pervasive Network Intelligence”: physically distributed, yet virtually centralized. This distributed approach to network monitoring is inherently more effective for large, complex environments.
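The contrast between the two models can be illustrated with a minimal sketch. The code below is not cPacket’s implementation; it is a hypothetical toy in which edge monitors (hypothetical `EdgeMonitor` class) inspect every packet locally and forward only noteworthy events (an oversize packet, or a flow crossing a packet-count threshold) to a central collector, so the collector handles a trickle of events rather than the full traffic volume:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Packet:
    src: str
    dst: str
    size: int  # bytes


class EdgeMonitor:
    """Inspects every packet at the edge; emits only relevant events."""

    def __init__(self, name, size_threshold=1400, flow_threshold=3):
        self.name = name
        self.size_threshold = size_threshold  # flag unusually large packets
        self.flow_threshold = flow_threshold  # flag chatty flows
        self.flow_counts = Counter()

    def inspect(self, pkt):
        """Return a (possibly empty) list of events for this packet."""
        events = []
        self.flow_counts[(pkt.src, pkt.dst)] += 1
        if pkt.size > self.size_threshold:
            events.append({"monitor": self.name, "type": "oversize",
                           "src": pkt.src, "size": pkt.size})
        # Emit exactly once, when the flow first crosses the threshold.
        if self.flow_counts[(pkt.src, pkt.dst)] == self.flow_threshold:
            events.append({"monitor": self.name, "type": "chatty_flow",
                           "src": pkt.src, "dst": pkt.dst})
        return events


class CentralCollector:
    """Virtually centralized view: receives events, not raw traffic."""

    def __init__(self):
        self.events = []

    def report(self, event):
        self.events.append(event)


# Wire one edge monitor to the collector and replay some sample traffic.
collector = CentralCollector()
monitor = EdgeMonitor("rack-1")
traffic = [Packet("10.0.0.1", "10.0.0.9", 512)] * 5 + \
          [Packet("10.0.0.2", "10.0.0.9", 1500)]
for pkt in traffic:
    for event in monitor.inspect(pkt):
        collector.report(event)

print(f"{len(traffic)} packets inspected, {len(collector.events)} events reported")
```

The design point is that the per-packet work happens where the traffic is, and only the distilled results travel to the central vantage point; in the sketch, six packets yield two events.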

Kay suggested that network professionals assess whether they need more agile network monitoring solutions by considering a few questions:

  • Is it easy enough to immediately and conclusively resolve intermittent problems that negatively impact business activities and users’ quality-of-experience?
  • Is timely, consistent, and proactive situational awareness available based on granular performance and health indicators?
  • Is it possible to drive capacity planning and traffic engineering based on granular information about temporal behaviors like spikes and jitter?
  • Is it possible to interactively search network traffic in real time to find telltale signs of imminent issues and problems like distributed denial of service?
  • Do you have the information agility needed to optimize your operational efficiency and infrastructure utilization?
