Confluent, the event streaming platform pioneer, announced the launch of elastic scaling for Apache Kafka.
The company unveiled the first in a series of innovations designed to accelerate how companies harness the full power of event streaming at any scale. The Elastic release delivers a degree of elasticity never before possible in event streaming without extensive Kafka expertise or large teams of developers working around the clock.
Companies can now harness the power of event streaming instantly to meet the real-time urgency and unprecedented uncertainty businesses face today.
These past few months have completely reshaped how businesses interact with customers. Applications have become many companies’ sole touchpoints, making business success even more reliant on exceptional digital experiences.
Adapting and scaling these applications to quickly meet customer demand is increasingly important, but also increasingly difficult as most applications are built on rigid data architectures that span legacy technology and modern cloud environments.
In addition to these technical challenges, businesses either wastefully overspend on resources they rarely or never use, or get caught off guard by unexpected traffic surges that cause poor application performance and downtime. The only way to build an application that can scale on demand and operate cost-efficiently is with an elastic, cloud-based architecture.
“Elasticity is a fundamental property of cloud data systems and our first step in Project Metamorphosis is bringing elastic scaling to Kafka and its ecosystem in Confluent Cloud,” said Jay Kreps, co-founder and CEO, Confluent.
“This is particularly important in uncertain times like this where we see many of our customers needing to suddenly scale up the digital side of their business as that becomes their primary channel of serving their customers.”
Kafka has risen as the de facto standard for event streaming, with thousands of enterprises using it, including more than half of the Fortune 100. Despite this widespread adoption, many companies are unable to realize the full potential of real-time event streaming because of the level of Kafka expertise and developer effort required. The original creators of Kafka founded Confluent to eliminate these roadblocks.
The Elastic release brings new levels of elasticity to event streaming through Self-Balancing Clusters, planned for a future Confluent Platform release, as well as on-demand cluster provisioning and expansion and a usage-based billing model across all of Confluent Cloud. As a result, organizations can use event streaming no matter the scale of the use case, the budget of the project, or their experience with Kafka.
Remove the operational burden of balancing clusters
When organizations make event streaming a central platform for all their data, use cases can quickly grow to manage thousands of topics and produce trillions of messages a day.
As new topics are added and old brokers are retired, teams must reassign partitions and balance workloads to avoid overloading clusters and causing failures. Reshuffling massive volumes of data can be daunting and tedious, especially when these platforms are connected to critical applications.
To make this dramatically easier, Confluent plans to release Self-Balancing Clusters in Confluent Platform, which automates resource workload balancing, provides failure detection and self-healing, and allows customers to easily add and decommission brokers.
With Confluent Platform, companies self-managing Kafka can focus more resources on building exceptional real-time experiences with less risk and manual effort.
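The balancing problem that Self-Balancing Clusters automate can be pictured with a minimal sketch. The greedy strategy below measures a broker's load purely by partition count, which is an illustration of the idea only; the actual feature weighs far more factors, and none of these names come from Confluent's API.

```python
# Illustrative sketch only: greedily move partitions from the most-loaded
# broker to the least-loaded one until partition counts differ by at most 1.
# Broker and partition names here are hypothetical.

def rebalance(assignment: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return a balanced copy of a broker -> partitions assignment."""
    brokers = {b: list(parts) for b, parts in assignment.items()}
    while True:
        most = max(brokers, key=lambda b: len(brokers[b]))
        least = min(brokers, key=lambda b: len(brokers[b]))
        if len(brokers[most]) - len(brokers[least]) <= 1:
            return brokers
        brokers[least].append(brokers[most].pop())

# A newly added broker ("b3") starts empty and receives partitions:
plan = rebalance({
    "b1": ["orders-0", "orders-1", "orders-2"],
    "b2": ["payments-0", "payments-1", "payments-2"],
    "b3": [],
})
```

Decommissioning works the same way in reverse: empty the departing broker's list and let the remaining brokers absorb its partitions.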
Elastically scale Apache Kafka on demand
Scaling data architecture on demand is essential for handling unexpected spikes in traffic and avoiding application lag or downtime. However, most distributed computing architectures, including open source Apache Kafka, are incredibly challenging to scale because of the amount of custom coding and deep level of technical expertise needed to connect every data source.
A new innovation in Confluent Cloud solves this issue by enabling anyone to instantly provision and expand Dedicated clusters with a few clicks. This builds on the elasticity already available in Basic and Standard clusters that instantly scale up and down between 0-100 MBps.
Serverless properties like these, across the entire Confluent Cloud solution, enable companies to grow and shrink production-level use cases on demand without having to issue support tickets or go through the complex process of manual resizing, saving significant amounts of time and resources.
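The kind of decision such a service makes on the user's behalf can be sketched as a simple capacity picker: choose the smallest tier that covers observed throughput plus headroom. The tier sizes and headroom factor below are invented for illustration and are not Confluent Cloud's actual values.

```python
# Hypothetical autoscaling sketch: the tier steps and 20% headroom are
# assumptions for illustration, not real Confluent Cloud parameters.

TIERS_MBPS = [10, 25, 50, 100]  # assumed capacity steps, in MBps

def pick_tier(observed_mbps: float, headroom: float = 1.2) -> int:
    """Return the smallest tier covering observed throughput with headroom."""
    needed = observed_mbps * headroom
    for tier in TIERS_MBPS:
        if tier >= needed:
            return tier
    return TIERS_MBPS[-1]  # already at maximum capacity

assert pick_tier(4.0) == 10    # light traffic stays on the smallest tier
assert pick_tier(30.0) == 50   # a surge scales capacity up automatically
assert pick_tier(2.0) == 10    # when traffic falls, capacity shrinks again
```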
Operate efficiently by paying only for data streamed
Because organizations want to be prepared for high-traffic moments and future growth, they often end up paying for peak provisioned capacity, which can be up to 10x higher than what is actually consumed. With greater scrutiny over costs, the pay-as-you-go model of born-in-the-cloud data systems has become the new standard.
Confluent Cloud was engineered with pay-for-what-you-use in mind and Confluent is extending this model to every part of its cloud service. Customers can now commit to spend a certain amount across the entire cloud portfolio, including fully managed stream processing with Confluent Cloud KSQL and Confluent’s fully managed Kafka connectors, and pay only for the resources they use. As a result, organizations can operate event streaming with greater cost efficiencies for any budget.
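The cost difference between peak provisioning and usage-based billing is easy to see with back-of-the-envelope arithmetic. All prices and volumes below are made up for illustration; they are not Confluent's rates.

```python
# Hypothetical comparison of peak-provisioned vs. pay-as-you-go cost.
# The $/GB price and daily volumes are invented for illustration.

PRICE_PER_GB = 0.11  # assumed $/GB streamed

def usage_cost(gb_streamed_per_day: list[float]) -> float:
    """Pay only for data actually streamed."""
    return sum(gb_streamed_per_day) * PRICE_PER_GB

def peak_cost(gb_streamed_per_day: list[float]) -> float:
    """Pay every day as if streaming at the month's peak rate."""
    return max(gb_streamed_per_day) * len(gb_streamed_per_day) * PRICE_PER_GB

# A quiet month with a single traffic spike:
daily_gb = [100.0] * 29 + [1000.0]
print(round(peak_cost(daily_gb) / usage_cost(daily_gb), 1))  # 7.7
```

The spikier the traffic, the wider this gap grows, which is why peak provisioning can run up to the 10x figure cited above.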
Established innovations in Elastic scaling
The innovations announced join a host of features already available across Confluent Cloud, Confluent Platform, and open source Kafka, and are designed to make event streaming truly elastic. These innovations include:
- Auto-scaling for Basic and Standard clusters – The only cloud-native Kafka service that elastically scales production workloads up and down between 0 and 100 MBps instantaneously. This automates the hours typically spent sizing or provisioning clusters whenever workloads shift.
- Scale to zero with no hourly compute price on Basic – Kick-start event streaming quickly in Confluent Cloud and only pay for what is streamed with no minimum commitments. This enables developers to get apps to market faster with limited resources and no vendor lock-in.
- Confluent operator – Run Confluent Platform and Kafka as a cloud-native system on Kubernetes to automate deployment and key lifecycle operations. This overcomes the manual addition and removal of Kafka brokers, Connect workers, and other components of the Confluent Platform so the environment scales instantaneously and more elastically.
- Tiered storage (preview) – By separating data storage from data processing, Tiered Storage makes storing huge volumes of data in Kafka more manageable and lets each layer scale independently. With more data stored in Kafka, it unlocks new use cases such as using Kafka as a system of record, training machine learning models on historical streams of data, and running year-over-year analytics.
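The tiering decision behind Tiered Storage can be sketched as a simple age-based rule: closed log segments older than a hotset window move to cheap object storage, while recent segments stay on broker disks. The threshold and field names below are illustrative, not Kafka's actual configuration.

```python
# Illustrative sketch of a tiering rule; the hotset window and the
# Segment fields are assumptions, not real Kafka settings.
from dataclasses import dataclass

HOTSET_SECONDS = 6 * 60 * 60  # assumed: keep the last 6 hours on local disk

@dataclass
class Segment:
    base_offset: int
    age_seconds: int
    closed: bool  # only closed (no longer written) segments can be tiered

def storage_tier(seg: Segment) -> str:
    if seg.closed and seg.age_seconds > HOTSET_SECONDS:
        return "object-storage"  # cheap and effectively unbounded
    return "local-disk"          # fast, serves recent reads

segments = [
    Segment(0, 86_400, True),     # day-old data -> tiered out
    Segment(5_000, 7_200, True),  # two hours old -> still hot
    Segment(9_000, 60, False),    # active segment -> never tiered
]
```

Because the historical tier is effectively unbounded, retention can grow to support the system-of-record and year-over-year analytics use cases listed above without resizing broker disks.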
Currently, Apache Kafka uses Apache ZooKeeper to store metadata such as the location of partitions and the configuration of topics. This metadata is kept outside of Kafka itself, in a separate ZooKeeper cluster.
However, this external metadata management forces system administrators to run two systems rather than one, inevitably leading to divergent configuration systems, management interfaces, and security settings. The result is an unnecessarily steep learning curve and operational inefficiency, and it is the biggest bottleneck in Kafka’s scalability.
To make event streaming more scalable for everyone, the Apache Kafka community has made tremendous progress toward the removal of ZooKeeper (KIP-500). Once complete, Kafka will be able to support millions of partitions with better resilience and faster failover times.
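The core idea of KIP-500 is to keep cluster metadata as an ordered, replicated event log inside Kafka itself, so that any controller can rebuild the full metadata state by replaying the log. The record shapes below are invented for illustration; the real KRaft metadata records differ.

```python
# Minimal sketch of metadata-as-a-log: state is a pure function of the
# ordered records. Record types and fields here are hypothetical.

def replay(metadata_log: list[dict]) -> dict:
    """Rebuild topic metadata by applying records in log order."""
    state: dict = {}
    for record in metadata_log:
        if record["type"] == "create_topic":
            state[record["topic"]] = {"partitions": record["partitions"]}
        elif record["type"] == "delete_topic":
            state.pop(record["topic"], None)
    return state

log = [
    {"type": "create_topic", "topic": "orders", "partitions": 6},
    {"type": "create_topic", "topic": "clicks", "partitions": 12},
    {"type": "delete_topic", "topic": "clicks"},
]
```

Because a standby controller can hold the same log, failover reduces to continuing from the last applied record rather than re-fetching all state from a separate system, which is one reason the community expects faster failover once ZooKeeper is removed.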
Confluent will continue to make new product announcements as a part of Project Metamorphosis through the rest of the year. On the first Wednesday of every month, Confluent will unveil a set of capabilities that address the major technical challenges organizations face when putting event streaming at the heart of their business, laying the foundation to make pervasive use of event streaming possible for any organization.