
You Can’t Secure What You Can’t See: How Cycode Maps Every AI Tool in Your SDLC

Your developers are using AI. All of them. The question isn’t whether—it’s which tools, which models, which MCPs, and where.

The Shadow AI Problem Is Bigger Than You Think

97% of organizations lack visibility into how and where AI is being used across their software development lifecycle.

The disconnect has a name: Shadow AI.

Shadow AI in software development goes well beyond a developer pasting code into ChatGPT. It’s structural. Developers are configuring AI rule files in repositories. They’re connecting MCP servers to their IDEs. They’re pulling models from Hugging Face, OpenAI, and Anthropic across dozens of repos. They’re embedding AI-powered packages as dependencies. And none of this shows up in your existing security tooling.

You can’t write a policy for a tool you don’t know exists.

How Cycode Detects AI Across the SDLC

Instead of relying on developers to self-report their tool usage, Cycode analyzes the signals that AI tools leave behind in your SDLC—automatically, continuously, and across every repository under management.

Every AI tool that touches your codebase leaves fingerprints. Here’s how Cycode finds them.

Signal #1: Commit Message Scanning

AI coding assistants leave traces in commit history. Cycode scans commit messages across all repositories to identify AI-assisted development. In practice, this surfaces entries like a Co-Authored-By: Claude Opus 4.6 <[email protected]> tag in your backend API repo – evidence, extracted directly from source control, that an AI model is contributing code to a production codebase.
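For illustration, here is a hedged sketch of what such a trailer looks like in a commit (the commit hash, author, subject, and email address are hypothetical):

```text
commit 3f2a9c1 (backend-api, branch: main)
Author: Jane Developer <jane@example.com>

    Add pagination to the /orders endpoint

    Co-Authored-By: Claude Opus 4.6 <noreply@example.com>
```

Cycode scans commit history for trailers like this and surfaces them as evidence of AI-assisted development.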

But it goes beyond co-author tags. Cycode also identifies AI bot users operating across your repositories. These bots appear in your source control as legitimate users, but they represent AI-driven activity that most security teams lack visibility into.

Signal #2: AI Rule File Detection

Modern AI assistants allow developers to define behavioral rules via configuration files committed into repos: .cursor/rules/*.mdc, .github/copilot-instructions.md, .gemini/config.yaml, CLAUDE.md, .cursorrules, and others. These files specify how the AI should generate code within that project.

Cycode automatically discovers and inventories every AI rule file across your codebase, and aggregates the rule-file count across all your repositories in a single view. More on why these files matter in a dedicated section below.

Signal #3: AI Skill File Detection

Assistants like Claude support skill files—reusable task definitions that teach the AI to execute specific workflows. Cycode catalogs these automatically, giving security teams insight into how AI is being operationalized in each repository.

Signal #4: MCP Server Configuration Analysis

MCP servers connect AI assistants to external services—GitHub, Atlassian, Notion, Figma—giving the AI direct access to your tooling and data. Cycode detects MCP configurations in files like mcp_config.json and .cursor/mcp.json, extracting the provider, transport type, protocol version, and every repository and developer associated with each server.
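For reference, a minimal MCP server configuration of the kind Cycode looks for might resemble the sketch below (hedged and illustrative: the server package is the reference GitHub MCP server, and the token value is a placeholder, not a working credential):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<personal-access-token>"
      }
    }
  }
}
```

The env block is where the credential-scope questions arise: whatever token is configured there determines exactly what the assistant can read or modify on the developer’s behalf.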

One example: Cycode discovered a GitHub MCP server across 12 repositories through 8 file-pattern evidence paths and 7 coding-assistant-hook paths—spanning backend-api, frontend-app, and infra-terraform. This is covered in depth below.

Beyond static file analysis, Cycode monitors signals from coding assistant integrations directly. When a developer uses Cursor or Claude Code, Cycode’s hooks capture the activity and correlate it with specific developers, repositories, MCPs, and models.

Deep Dive: What’s Hiding in Your Rule Files?

AI rule files are one of the most consequential – and least understood – artifacts in modern codebases. They deserve closer scrutiny than most security teams are giving them.

What Are AI Rule Files?

When a developer creates a .cursor/rules/secure-dev-python.mdc file and commits it to a repository, they’re writing instructions that the AI assistant will follow every time it generates or modifies code in that repository. The file might specify naming conventions, required libraries, or security constraints for generated code.
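A hypothetical example of what such a rule file might contain (illustrative only, not drawn from Cycode or any real codebase; the exact frontmatter fields vary by assistant):

```markdown
---
description: Secure Python development guidelines
globs: ["**/*.py"]
alwaysApply: true
---

- Always use parameterized queries; never build SQL statements through string concatenation.
- Validate and sanitize all external input at API boundaries.
- Use the secrets module for tokens and nonces, never random.
- Never hardcode credentials; read them from the environment or a secrets manager.
```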

But here’s the security-relevant question: who’s writing these rules, and are they correct?

Why Security Teams Should Care

Rule files are unsigned, unreviewed, and execute implicitly. There’s no approval gate, no PR review requirement, no policy framework around them. Any developer can commit a rule file that fundamentally changes how AI generates code in a shared repository.

Consider the risk scenarios:

Malicious rule injection. A compromised developer account—or a supply chain attack on a shared repository—could introduce a rule file that instructs the AI to embed backdoors, disable input validation, or use weak cryptographic patterns. The AI would silently comply, generating insecure code that looks perfectly normal to human reviewers.

Conflicting or contradictory rules. In one repository, Cycode detected Cursor rules, Copilot instructions, Gemini configs, and Claude rules—four different AI assistants with potentially conflicting guidance. If one rule file says “always use parameterized queries” and another doesn’t mention SQL injection at all, the security posture depends on which assistant the developer happens to use that day.

Stale or abandoned rules. Rule files committed months ago may reference outdated patterns, deprecated libraries, or insecure defaults. Unlike dependencies that trigger SCA alerts when they age out, rule files sit silently in the repository with no expiration mechanism.

Incomplete coverage. A rule file might enforce secure coding patterns for one language or domain—but completely ignore others in the same repository. For example, a repository might have detailed security rules for Python development, but contain no guidance at all for Infrastructure as Code (IaC) files like Terraform configurations. The AI will generate hardened Python code while simultaneously producing insecure infrastructure definitions—in the same repo, under the same developer’s watch.
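To make that concrete, here is an entirely hypothetical Terraform fragment of the kind an ungoverned assistant could produce in a repository whose rule files only cover Python:

```hcl
# Hypothetical example: insecure IaC generated without any rule-file guidance
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-build-artifacts"
}

resource "aws_s3_bucket_public_access_block" "artifacts" {
  bucket                  = aws_s3_bucket.artifacts.id
  block_public_acls       = false   # public ACLs allowed
  block_public_policy     = false   # public bucket policies allowed
  ignore_public_acls      = false
  restrict_public_buckets = false
}
```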

The Gaps Between Rules and Reality

The risk scenarios above are theoretical. But in practice, Cycode surfaces something even more concerning: the gap between what’s configured and what’s actually happening.

Here are the kinds of questions AppSec teams should be asking—and what Cycode reveals when they do:

  1. “Which repositories have no AI rule files at all?” If rule files define the guardrails for AI-generated code, then repositories without them have no guardrails at all. Cycode can query across your entire codebase and surface every repository without a rule file – these are your highest-risk blind spots: AI is operating, but entirely ungoverned.

  2. “Are the right assistants respecting the right rules?” A repository might have rule files configured for one AI assistant, but the actual code contributions are coming from a completely different one. Cycode can detect this mismatch—for instance, surfacing a repository that contains rule files for one coding assistant, while commit history shows code being co-authored by a different AI model entirely.

  3. “Are AI tools contributing to repos they weren’t expected in?” Cycode can reveal that a repository has rule files configured for one assistant, but branch-level analysis shows contributions from an entirely different AI agent—one that was never formally adopted or approved for that project.

What About Skill Files?

Skills take this a step further. While rule files define constraints (“don’t do X”), skill files define capabilities (“here’s how to do Y”).

Skills effectively teach AI assistants to perform operational tasks: deploying services, running infrastructure commands, modifying configurations. That’s powerful automation, and it’s defined entirely in a markdown file committed to a repository, as the sketch below illustrates.
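A hedged sketch of what a skill file might look like, loosely modeled on the markdown-plus-frontmatter pattern these assistants use (the path, fields, and commands are illustrative):

```markdown
---
name: deploy-staging
description: Deploy the current branch to the staging environment
---

# Deploy to staging

1. Run the test suite: make test
2. Build the container image: docker build -t registry.example.com/app:staging .
3. Apply the staging manifests: kubectl apply -f k8s/staging/
```

A file like this sits in the repository alongside the code it can deploy, which is exactly why a skill inventory matters.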

Without visibility into what skills exist across your organization, you have no way to assess whether AI assistants are being granted capabilities that exceed appropriate boundaries.

What Cycode Does With This

Cycode doesn’t just detect rule and skill files—it links them to the repositories and AI assistants they govern and surfaces them in the AIBOM so security teams can review, approve, or flag them.

Using Cycode’s knowledge graph, you can query: “Show me every repository that contains an AI Rule File”—and instantly see results.

Deep Dive: MCP Servers—The New Shadow Integration Layer

If AI rule files are the policies that guide AI behavior, MCP servers are the access layer that determines what AI can reach. And they represent a fundamentally new category of integration risk.

What Are MCP Servers?

The Model Context Protocol (MCP) is an open standard that allows AI coding assistants to connect to external tools and services. An MCP server acts as a bridge: it provides the AI assistant with authenticated access to platforms such as GitHub, Atlassian, Notion, and Figma, as well as any service that exposes an MCP endpoint.

When a developer adds an MCP server configuration to their IDE or commits it to a repository, they’re granting their AI assistant the ability to read, query, and potentially write to that external service using the configured credentials.

The Risk Model

MCP servers introduce a transitive access problem.

The blast radius is wide. One MCP server, improperly configured, can expose data across multiple repositories and services simultaneously.

Credential scope is opaque. When a developer configures a GitHub MCP server in their IDE, what OAuth scopes does it have? Can it read private repositories? Can it create pull requests? Can it access organization-level secrets? Most developers don’t think about this—and most security teams can’t answer the question because they don’t know the MCP server exists.

MCP servers persist in code. Once committed to a repository, an mcp_config.json file will be cloned by every developer who checks out that repo. The MCP configuration effectively propagates across the team, often without explicit awareness.

New MCPs emerge constantly. The MCP ecosystem is growing rapidly. Atlassian, GitHub, Notion, Figma, and dozens of other services now offer MCP endpoints. Each new MCP server a developer connects is a new integration that your security team needs to evaluate—but won’t, if they can’t see it.

AI tools are being embedded into CI/CD pipelines. The risk isn’t limited to developer IDEs.
Cycode can detect AI tools being invoked directly within CI/CD workflows—for example, GitHub Actions workflow steps that reference MCP configurations or install and execute AI coding assistants programmatically. This means AI isn’t just assisting developers at their desks; it’s running autonomously inside your build pipelines, generating or modifying code as part of automated workflows.
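A hedged sketch of the kind of workflow step this refers to (the workflow, package name, and flags are illustrative, not a specific detection or a real tool’s CLI):

```yaml
name: ai-assisted-refactor
on: workflow_dispatch

jobs:
  refactor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run an AI coding assistant headlessly
        run: |
          npm install -g @example/ai-coding-cli   # hypothetical assistant CLI
          ai-coding-cli --prompt "Refactor deprecated API calls" --apply
```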

What Cycode Does With This

Cycode automatically identifies every MCP server being used across your organization.

Each MCP is fully traceable through its evidence chain: from the server entity, through the configuration file where it’s defined, to the repository and owning organization.

The AI Bill of Materials: A Structured Inventory

Detection is the foundation. But visibility without structure is noise. Cycode’s AI Bill of Materials (AIBOM) organizes every detected AI component into a continuously updated, categorized inventory.

When you open Cycode’s AI & Machine Learning inventory, you see your entire AI landscape organized into six categories:

AI Infrastructures — Platforms for building and managing AI/ML workloads, including LLM gateways and orchestration frameworks.

AI Models — Machine learning models detected in your repositories, whether self-hosted or referenced from model hubs.

AI Code Assistants — Tools providing AI-powered code generation and completion within development workflows.

MCPs — Model Context Protocol integrations that allow AI models to interact with external tools and services.

AI Packages — Software dependencies and libraries used to integrate AI capabilities into applications.

AI Secrets — API keys, tokens, and credentials used to authenticate with AI services.

AI Visibility: From Org-Wide Metrics to Individual Repositories

The AIBOM provides a categorized inventory. But security teams also need to answer two levels of questions: “How widespread is AI across my organization?” and “What exactly is happening in this specific repository?”

Cycode answers both.

The Big Picture: AI Adoption Metrics

Flat inventories tell you what exists. But security leaders need to understand patterns, concentration, and risk distribution. Cycode’s AI security dashboard goes beyond listing assets—it provides a statistical overview of your entire AI landscape: adoption rates across repositories, MCP distribution and exposure, model provider diversity, and rule file coverage gaps. These are the metrics that turn raw visibility into strategic governance decisions: where to invest in policy, which teams need guardrails, and where risk is silently accumulating.

Per-Repository AI Visibility

The AIBOM provides the organizational view. But teams also need to answer: “What AI is in this specific repository?”

In Cycode’s repository inventory, you can filter by AI & ML to surface only repos with AI components. The filter breaks down further by subcategory: AI code assistants, AI models, AI infrastructures, AI packages, AI secrets, and MCPs.

Graph Queries

Cycode’s knowledge graph supports cross-entity queries such as “Find all repositories containing AI Rule Files,” with each result linked to its specific rule files in an aggregated view.

This is how you go from “do we have AI rule files?” to “which repositories, which assistants, and what do the rules say?” in seconds.

From Visibility to Governance

Comprehensive AI visibility enables a governance model that’s evidence-based rather than policy-by-assumption:

Define and enforce tool policies. Approve specific models, assistants, and MCPs.

Generate audit-ready AIBOMs. Export your complete AI inventory as a structured AIBOM document. When auditors or regulators ask “what AI tools are you using?”, the answer is a click away—not a weeks-long manual discovery effort.

Quantify AI attack surface. Understand exactly how many AI entry points exist in your environment: how many MCPs are active, how many models are invoked, how many AI secrets could be compromised.

Enable secure adoption. The goal isn’t to block AI—it’s to make AI adoption visible, governed, and aligned with your security posture. Developers get clear guardrails and an approved toolset. Security teams get evidence and control.


Shadow AI exists wherever there’s no visibility. Cycode provides that visibility—not through surveys or manual audits, but through continuous, automated detection of every AI signal in your SDLC.

If you can see it, you can govern it. If you can govern it, you can secure it.