TL;DR: Continued AI code agent integrations bring Cursor and Anthropic Console deeper into the Cycode platform, giving security and engineering leaders the visibility they need to govern how AI is actually being used to build software.
AI coding agents have moved from experiment to infrastructure. Developers are using Claude, Cursor, and a growing list of agentic tools to generate, refactor, and ship code at a pace human-centric security was never designed to keep up with. The prompt has replaced an entire skill set, and the SDLC has become the ADLC: the Agentic Development Lifecycle.
The problem most security teams face isn’t whether AI is being used in their codebase. It’s that they have no idea how much, by whom, with which models, or to what effect.
That changes today.
What’s New with Cycode + Cursor + Anthropic
We’re introducing two deep integrations and a set of platform capabilities that give you a single source of truth for AI code agent usage across your organization:
Cursor Integration
Cycode now connects directly to Cursor to surface the full picture of how your developers are working with the tool, from individual developer activity and adoption patterns to model selection, agent versus chat workflows, and how often AI-generated suggestions are actually being accepted into the codebase. The result is a complete view of Cursor usage across your organization, not just a fragment of it.
Anthropic Console Integration
For teams using Claude through the Anthropic Console, Cycode brings the same depth of visibility: developer-level activity, model usage across sessions, how agents are interacting with your environment through tool use, and the patterns that emerge when those signals are viewed together. Whether your developers are running Claude in agentic workflows or conversational ones, you see it.
Both integrations sync daily, and on initial connection Cycode pulls the last 30 days of historical data, so you get a meaningful baseline from day one, not a blank dashboard you have to wait a month to populate.
A New AI Developers Usage Page
Under Dev Activity → AI Dev Activity in the left navigation, you’ll find a new dedicated page listing every developer active with AI code agents during the selected date range. Usage is aggregated across every tool they touched, so you can stop stitching together exports from three different vendor consoles. Click any developer row to drill into a detailed card with a full per-agent breakdown.
New Widgets on the AI Visibility Dashboard
Security and engineering leaders aren’t being asked for raw metrics in board meetings and architecture reviews; they’re being asked questions. The new widgets on the AI Visibility dashboard are built to answer them directly.
- “Is AI adoption actually growing inside our engineering org, or has it plateaued?” The active developers over time widget shows you the trajectory of agentic adoption: whether you’re scaling, stalling, or seeing pockets of growth concentrated in specific teams.
- “How much of our codebase is now AI-authored?” Lines generated over time gives you a defensible answer, tracked continuously rather than estimated quarterly.
- “How much is Claude specifically being used across the org?” The total commits widget for Claude isolates Claude-driven activity, so you can quantify the footprint of a single agent rather than lumping all AI tools together.
- “Which assistants are our developers actually choosing?” Lines per coding assistant and developer usage by code assistant cut through the assumption that “everyone uses Cursor” or “everyone uses Claude.” The reality is usually more fragmented, and the data shows you exactly where.
- “Which models are our developers calling, and is that consistent with our governance policy?” Model usage distribution surfaces which models are doing the work, which is critical for compliance, cost, and risk conversations.
- “Who are the power users, and who is adopting AI most effectively?” Top developer leaderboards by lines generated and by acceptance rate identify both the developers driving the most AI volume and the ones whose suggestions are actually landing: two very different signals that together tell you where AI is genuinely accelerating delivery.
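To make the “two very different signals” point concrete, here is a minimal sketch of why a volume leaderboard and an acceptance-rate leaderboard can name different developers. The records and field names are purely illustrative, not Cycode’s schema or API:

```python
from dataclasses import dataclass

@dataclass
class DevUsage:
    # Hypothetical per-developer usage record; names are illustrative only.
    developer: str
    lines_suggested: int  # AI-suggested lines in the period
    lines_accepted: int   # suggested lines that landed in the codebase

    @property
    def acceptance_rate(self) -> float:
        # Share of AI-suggested lines the developer actually kept.
        return self.lines_accepted / self.lines_suggested if self.lines_suggested else 0.0

usage = [
    DevUsage("alice", lines_suggested=12_000, lines_accepted=4_800),  # high volume, 40% accepted
    DevUsage("bob",   lines_suggested=1_500,  lines_accepted=1_200),  # low volume, 80% accepted
    DevUsage("carol", lines_suggested=8_000,  lines_accepted=6_400),  # strong on both
]

# The two leaderboards rank by different keys, so they can crown different people.
top_by_volume = max(usage, key=lambda u: u.lines_accepted)
top_by_rate = max(usage, key=lambda u: u.acceptance_rate)
print(top_by_volume.developer, top_by_rate.developer)  # carol bob
```

A developer like "alice" can dominate raw AI volume while discarding most suggestions; reading both signals together is what tells you where AI output is actually sticking.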
Why This Matters Now
You can’t govern what you can’t see, and right now most security teams can’t see who in their org is using which AI coding agent, how often, or with which models. Every other governance decision downstream of that gap is a guess. Policies get written without knowing which tools are actually in use. Budgets get approved without knowing which models are doing the work. Risk reviews get filed without knowing which developers are producing 10x their output through an agent and which haven’t touched one.
Control in the agentic era starts with visibility: knowing which AI tools your developers are using, which models those tools are calling, and how much of your codebase is now AI-authored.
These integrations with Cursor and Anthropic are the front door to that baseline. They turn shadow AI into known AI. They turn anecdotes about “everyone is using Cursor now” into measurable, defensible data. And they feed directly into the rest of Cycode’s Agentic Development Security Platform (ADSP), meaning the lines generated by an AI agent on Tuesday can be correlated against the SAST findings on Wednesday, the pipeline activity on Thursday, and the runtime exposure on Friday.
That’s the difference between a usage report and an actual governance program.
Now with Context Across the Whole AI Layer
Standalone usage stats are interesting. Usage stats correlated to risk are decisive.
Because these integrations feed into Cycode’s Context Intelligence Graph, AI code agent activity becomes part of the same connected fabric as your code, pipelines, dependencies, secrets, and runtime. You can ask questions like:
- Which developers are generating the most AI code, and is that code passing through our SAST and secrets controls?
- Which models are most associated with risky patterns or rejected tool proposals?
- Where are AI-generated changes flowing into production systems, and who owns them?
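The first of those questions boils down to a join between usage data and security findings. A toy sketch of that correlation, using made-up records that stand in for graph data (this is not Cycode’s API or query language):

```python
# Illustrative records only: hypothetical commit and SAST data.
ai_commits = [
    {"sha": "a1f", "developer": "alice", "ai_generated": True},
    {"sha": "b2e", "developer": "bob",   "ai_generated": True},
    {"sha": "c3d", "developer": "carol", "ai_generated": False},
]
sast_findings = [
    {"sha": "a1f", "severity": "high"},
    {"sha": "c3d", "severity": "low"},
]

# Index findings by commit, then keep only AI-generated commits that triggered one.
flagged_shas = {f["sha"] for f in sast_findings}
ai_with_findings = [
    c["developer"] for c in ai_commits
    if c["ai_generated"] and c["sha"] in flagged_shas
]
print(ai_with_findings)  # ['alice']
```

The point is the shape of the question, not the mechanics: once agent activity and scanner findings live in the same connected graph, “which AI-generated code is tripping our controls” becomes a lookup rather than an investigation.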
For CISOs, this is the visibility layer that makes board-level reporting on AI risk possible. For AppSec teams, it’s the signal that tells you where to focus AI-aware guardrails. For engineering leaders, it’s a clearer picture of where AI is actually accelerating your team, and where it isn’t.
Available Today
These deeper Cursor and Anthropic Console integrations, the AI Developers Usage page, and the new AI Visibility dashboard widgets are available to all Cycode customers today. To enable them, head to the integrations section of your Cycode tenant and connect Cursor and Anthropic Console to start syncing.
Agentic development is here. Now you can see it, measure it, and govern it. Want to bring this level of visibility to your organization?
