CIS, Astrix, and Cequence partner on new AI security guidance
The Center for Internet Security, Astrix Security, and Cequence Security announced a strategic partnership to develop new cybersecurity guidance tailored to the unique risks of AI and agentic systems.
This collaborative initiative builds on the CIS Critical Security Controls (CIS Controls), extending its principles into AI environments where autonomous decision‑making, tool and API access, and automated threats introduce new challenges. The partnership will initially develop two CIS Controls companion guides: one for AI Agent Environments, focused on securing the agent system lifecycle, and one for Model Context Protocol (MCP) environments.
MCP environments introduce unique risks, including credential exposure, ungoverned local execution, unapproved third‑party connections, and uncontrolled data flows between models and tools. Together, these guides will provide targeted safeguards for organizations operating in environments where MCP agents, tools, and registries interact dynamically with enterprise systems.
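The companion guides themselves are not yet published, but the risks named above hint at the kind of safeguards they are likely to cover. As a rough, hypothetical illustration only (the function and allowlist names below are assumptions, not anything from CIS, Astrix, or Cequence), the sketch shows a pre-flight check an organization might run before an agent attaches an MCP server: the server must appear on an approved registry, and its credentials must be injected via environment variables rather than sitting exposed in a config file.

```python
import os

# Hypothetical allowlist of approved MCP servers; in practice this would come
# from an enterprise registry, not a hard-coded dict.
APPROVED_SERVERS = {
    "internal-tickets": {"required_env": ["TICKETS_API_TOKEN"]},
    "docs-search": {"required_env": ["DOCS_SEARCH_KEY"]},
}


def check_mcp_connection(server_name: str) -> None:
    """Illustrative pre-flight check before an agent connects to an MCP server.

    Blocks unapproved third-party connections and refuses to proceed if the
    server's credentials are not supplied through the environment (i.e. they
    would otherwise have to be hard-coded or pasted into a config file).
    """
    policy = APPROVED_SERVERS.get(server_name)
    if policy is None:
        raise PermissionError(f"MCP server '{server_name}' is not on the approved registry")

    missing = [var for var in policy["required_env"] if not os.environ.get(var)]
    if missing:
        raise RuntimeError(
            f"Credentials for '{server_name}' must be injected via environment "
            f"variables; missing: {missing}"
        )


if __name__ == "__main__":
    check_mcp_connection("internal-tickets")   # passes only if the token is in the environment
    # check_mcp_connection("unvetted-plugin")  # would raise PermissionError
```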
“AI presents both tremendous opportunities and significant risks,” said Curtis Dukes, EVP and GM of Security Best Practices at CIS. “By partnering with Astrix and Cequence, we are ensuring that organizations have the tools they need to adopt AI responsibly and securely.”
Astrix’s contribution centers on securing AI agents, MCP servers, and the Non‑Human Identities (NHIs), such as API keys, service accounts, and OAuth tokens, that link them to critical systems.
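To make the NHI angle concrete, here is a minimal sketch, under assumed data shapes and thresholds, of the kind of hygiene check such guidance might formalize: inventorying the credentials an agent uses and flagging any that are stale or overly broad. The record type, 90-day window, and scope names are illustrative assumptions, not Astrix product behavior or CIS requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative inventory record for a non-human identity (NHI) used by an AI agent.
@dataclass
class NonHumanIdentity:
    name: str          # e.g. "billing-agent-service-account"
    kind: str          # "api_key", "service_account", or "oauth_token"
    created_at: datetime
    scopes: list[str]

MAX_AGE = timedelta(days=90)                    # assumed rotation window for this sketch
BROAD_SCOPES = {"admin", "*", "full_access"}    # assumed "too broad" scope names

def review_identity(nhi: NonHumanIdentity) -> list[str]:
    """Return findings for one NHI: stale credentials and overly broad scopes."""
    findings = []
    if datetime.now(timezone.utc) - nhi.created_at > MAX_AGE:
        findings.append(f"{nhi.name}: credential older than {MAX_AGE.days} days, rotate it")
    broad = BROAD_SCOPES & set(nhi.scopes)
    if broad:
        findings.append(f"{nhi.name}: overly broad scopes {sorted(broad)}")
    return findings

if __name__ == "__main__":
    sample = NonHumanIdentity(
        name="support-agent-oauth",
        kind="oauth_token",
        created_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
        scopes=["tickets.read", "admin"],
    )
    for finding in review_identity(sample):
        print(finding)
```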
“AI agents and the non‑human identities that power them bring great potential but also new risks,” said Jonathan Sander, Field CTO of Astrix Security. “Our focus is helping enterprises discover, secure, and deploy AI agents responsibly, with the confidence to scale. Through this partnership, we’re providing clear, practical guidance to keep AI ecosystems safe so organizations can innovate with confidence.”
Cequence brings years of enterprise application and API security experience to agentic AI enablement and security.
“As organizations embrace agentic AI, trust hinges on visibility, governance, and control over what those agents can see and do to your applications and data,” said Ameya Talwalkar, CEO of Cequence Security. “Security is strongest through collaboration, and this partnership gives organizations clear guidance to adopt AI safely and securely.”
How the partnership supports organizations
- Extends trusted cybersecurity frameworks into AI environments, addressing risks from autonomous systems and integrations.
- Delivers clear, prioritized safeguards that guide enterprises toward secure and responsible AI adoption.
- Combines expertise across standards, API security, and application defense to provide comprehensive protection.
The new guidance is scheduled for release in early 2026, accompanied by workshops, webinars, and supporting resources delivered jointly by CIS, Astrix, and Cequence. Together, the organizations aim to help enterprises translate recommendations into practice while building a stronger foundation of trust, transparency, and resilience across the AI ecosystem. By working from a shared framework, enterprises, vendors, and security leaders can align on a common language for securing AI environments.