CircleCI has announced insights and enhanced installation features for its self-hosted server offering.
CircleCI’s self-hosted server solution gives software engineering teams the ability to scale under load and run multiple services at once, all within their own Kubernetes cluster and network, while retaining the full CircleCI cloud experience.
It increases privacy, efficiency, and collaboration across teams, which is especially useful for teams working in healthcare, finance, and other industries with high governance and compliance standards.
“Users across telecommunications, manufacturing, defense and other highly-regulated industries leverage Modzy to automate the deployment and monitoring of AI models across their organizations. Not only do we use CircleCI’s self-hosted solution as part of our own DevOps processes, but we’ve also integrated it with our MLOps pipelines to ensure easier and faster model deployment,” said Nathan Mellis, Head of Engineering, Modzy.
With CircleCI server 3.2, users gain additional options for securing their installation environments, including HTTP proxy support and SSL termination, along with expanded functionality that provides access to CircleCI’s insights API and larger resource classes.
CircleCI’s insights API, powered by the 2.5 million jobs CircleCI’s platform processes each day, provides a detailed overview of the health and usage of users’ repository build processes. Metrics include time-series data such as success rates and pipeline duration, along with other pertinent information for making better engineering decisions.
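As a rough illustration, workflow metrics of the kind the insights API exposes (per-workflow run counts, success counts, and duration statistics) can be rolled up into team-level health numbers. The sketch below uses a hypothetical sample payload, not real API data, shaped loosely after the insights workflow-metrics response:

```python
def summarize(items):
    """Aggregate a list of per-workflow metrics into an overall
    success rate and a run-weighted mean duration (seconds)."""
    total_runs = sum(i["metrics"]["total_runs"] for i in items)
    successes = sum(i["metrics"]["successful_runs"] for i in items)
    weighted_duration = sum(
        i["metrics"]["duration_metrics"]["mean"] * i["metrics"]["total_runs"]
        for i in items
    )
    return {
        "success_rate": successes / total_runs,
        "mean_duration_sec": weighted_duration / total_runs,
    }

# Illustrative sample, not real data.
sample = [
    {"name": "build-and-test",
     "metrics": {"total_runs": 80, "successful_runs": 72,
                 "duration_metrics": {"mean": 300}}},
    {"name": "deploy",
     "metrics": {"total_runs": 20, "successful_runs": 19,
                 "duration_metrics": {"mean": 120}}},
]

summary = summarize(sample)
print(summary)  # → {'success_rate': 0.91, 'mean_duration_sec': 264.0}
```

Aggregates like these are what make the time-series data actionable: a dropping success rate or climbing mean duration across workflows is an early signal to investigate a pipeline.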
Other benefits that CircleCI’s self-hosted server solution provides include:
- Enterprise-level security. Users can achieve the strictest security, compliance, and regulatory requirements with end-to-end control over their CircleCI installation.
- Powerful developer tools and functionalities. Teams operating behind their own firewall can now access CircleCI’s full cloud experience and latest features, such as orbs, scheduled workflows, matrix jobs, and more.
- Maintenance and monitoring. Create a complete picture of your software delivery tools with integration into existing infrastructure monitoring solutions such as Datadog, Splunk, ELK stack, and more.
- Strong support for scale and performance. Operate at scale under heavy loads and automatically leverage multiple core services at once within private networks. Ensure deployment redundancy that turns P0s into P1s and keeps teams building.
- Flexible hosting options. Users can run their installations on Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or a native Kubernetes installation.