The speed of innovation has officially outpaced the speed of traditional security. As AI moves from a future trend to the core of the Software Development Life Cycle (SDLC), security leaders are facing a new reality: the Velocity Paradox. How do you secure code that is being generated in seconds and deployed across a sprawling, AI-driven attack surface?
On January 28th, we brought together the brightest minds in cybersecurity for the 2026 Product Security Summit. From CISOs at global giants like Unity and Intermex to pioneers from Cloudflare and HackerOne, the message was clear: The era of siloed tools is over.
We call it The Great Convergence. It is the foundational unification of Application Security Testing (AST), Software Supply Chain Security (SSCS), and Application Security Posture Management (ASPM) into a single, context-aware platform. This shift is not just a market trend; it is the necessary evolution of Product Security for the AI Era.
Session 1 | CEO Remarks: The Great Convergence: Secure Software by Default in the AI Era
In the summit’s opening keynote, Lior Levy (Cycode) challenged the industry to look past the AI hype and address the fundamental structural flaws that have left enterprises vulnerable. He argued that the Velocity Paradox—the gap between rapid dev cycles and slow security reviews—cannot be solved by just tacking AI onto existing, broken tools.
Lior Levy (Cycode) called for a reset in thinking: The Great Convergence. He explained that application security was never meant to be fragmented. For years, teams have operated with AST tools in one silo, supply chain tools in another, and ASPM as a separate layer. The “Convergence” model treats these as one complete, foundational layer of context. He emphasized that artificial intelligence isn’t truly intelligent until it understands the relationships between code, pipelines, dependencies, and risks. Without this environment-specific awareness, AI remains a disconnected, artificial layer.
Notable Quote:
- “Application security was never meant to be fragmented to be effective. The Great Convergence is the path to moving from disconnected tools to one foundational layer of context.” — Lior Levy (Cycode)
Watch the session to learn how to move from fragmented tools to a unified foundation of context.
Session 2 | Vibe Engineering: Velocity Without the Vulnerabilities
As AI moves from a coding assistant to a primary driver of software development, a new phenomenon has emerged: Vibe Engineering. In this session, Amir Kazemi (Cycode) was joined by Nikola Dalcekovic (Schneider Electric), Brad Tenenholtz (Splyce.ai), and Katie Norton (IDC) to discuss a world where developers feel more productive than ever while security backlogs are growing 10x overnight.
The panel explored the concept of “Verification Debt.” When AI generates code in seconds, the primary bottleneck is no longer writing the code, but verifying its safety. Brad Tenenholtz (Splyce.ai) noted that while AI is great at syntax, it often introduces complex architectural flaws that traditional scanners miss. Nikola Dalcekovic (Schneider Electric) argued that if an engineer did not write the code themselves, a standard peer review loses its primary value, requiring security to shift toward automated AI-BOM tracking. Katie Norton (IDC) shared research showing a massive “automation bias,” where organizations over-trust their AI tools despite frequent security incidents.
Notable Quotes:
- “We often look for SQL injection, but that is not what an AI-generated vulnerability usually looks like. We are seeing complex architectural flaws, which are the hardest issues to fix.” — Brad Tenenholtz (Splyce.ai)
- “Software engineers are changing their craft. The new technical debt is actually verification debt, where the volume of code exceeds our ability to manually review it.” — Nikola Dalcekovic (Schneider Electric)
- “There is a massive automation bias today. 80% of organizations are confident in their ability to manage risk, even though 70% of them frequently encounter security issues.” — Katie Norton (IDC)
See how these organizations are managing the velocity paradox and tackling verification debt by watching the recording.
Session 3 | Fighting AI with AI: 10X Security Team Productivity
How do we build the 10x Security Team to keep pace with the 10x Developer? Devin Maguire (Cycode) sat down with Gaja Anand (formerly with Morgan Stanley) and Cassio Batista Pereira (StoneX) to map out the evolution of the security engineer.
The discussion focused on the “Agentic Security Engineer.” Gaja Anand (formerly with Morgan Stanley) argued that security must meet developers in their own AI-native environments by embedding security context directly into the agent’s rules. This requires a skill set shift where security practitioners become fluent in “agentic coding.” Cassio Batista Pereira (StoneX) warned of the “accountability trap,” noting that while AI can prioritize 10,000 alerts down to the ten that matter, it doesn’t solve the human problem of who is responsible for the final fix. He concluded that because AI learned to code from humans, it inherited our bad habits, making security hygiene more essential than ever.
Notable Quotes:
- “The 10x Security Engineer isn’t just someone with better tools; they are someone who has evolved their skill set to be fluent in how AI agents think.” — Gaja Anand (formerly with Morgan Stanley)
- “If you do not have a human-led filter, you have a recipe for disaster. When everyone trusts the result of a prompt blindly, that is where the problem starts.” — Cassio Batista Pereira (StoneX)
Learn how to evolve your security skill set and build an agentic security team by watching the full session.
Session 4 | Context-Driven Security: Connecting the Dots Across Code, Offensive Security, and Exposure Management
Prasad Raman (Cycode) hosted Kyle Metivier (HackerOne) and Adam Dudley (Nucleus Security) to discuss how modern teams unify fragmented signals. The core technical takeaway was the collapse of the “exploit timeline.” Kyle Metivier (HackerOne) explained that AI has empowered attackers to “chain” multiple low-severity findings into a production-grade exploit in hours, rendering static CVSS scores unreliable.
Adam Dudley (Nucleus Security) introduced the framework of “Exposure Reduction.” By mapping offensive ground truth (exploits seen in the wild) to the code-level visibility of Cycode, organizations can move from theoretical risk to “Mobilization.” This approach allows teams to identify which vulnerabilities are actually reachable and exploitable in their specific environment, reducing the noise and allowing developers to focus on the 1% of risks that truly matter.
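The triage logic described above can be sketched in a few lines. This is an illustrative example only, not Cycode or Nucleus Security code; the finding records and field names (`reachable`, `exploited_in_wild`) are hypothetical stand-ins for code-level reachability analysis and offensive ground truth.

```python
# Hypothetical "exposure reduction" triage: keep only findings that are both
# reachable in this environment AND exploited in the wild, rather than
# sorting by static severity alone. All data below is illustrative.

findings = [
    {"cve": "CVE-2026-0001", "severity": "low",      "reachable": True,  "exploited_in_wild": True},
    {"cve": "CVE-2026-0002", "severity": "critical", "reachable": False, "exploited_in_wild": True},
    {"cve": "CVE-2026-0003", "severity": "high",     "reachable": True,  "exploited_in_wild": False},
]

def mobilize(findings):
    # A "critical" that is unreachable in this environment is deprioritized,
    # while a reachable, actively exploited "low" rises to the top.
    return [f for f in findings if f["reachable"] and f["exploited_in_wild"]]

print([f["cve"] for f in mobilize(findings)])  # → ['CVE-2026-0001']
```

Note that the unreachable critical is filtered out while the exploited low survives, which is exactly why static CVSS ordering breaks down.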
Notable Quotes:
- “AI makes exploitation dynamic. The old mental model of fixing highs and criticals first breaks when three ‘lows’ can be chained into a critical exploit in hours rather than months.” — Kyle Metivier (HackerOne)
- “The integration of Cycode and Nucleus helps us connect enterprise security teams with their developers. Closing that gap helps security move from reactive noise to proactive risk reduction.” — Adam Dudley (Nucleus Security)
Watch the session to explore how to move from static risk scores to dynamic, reachable exposure reduction.
Session 5 | The AI Governance Playbook: The Differences Between Leaders and Laggards
Guillaume Montard (Cycode) distinguished the signal from the noise in AI governance with Daniel Hereford (Intermex), Rinki Sethi (Upwind Security), and Chris Peterson (Unity).
The panel identified the “Policy vs. Technical” divide. Rinki Sethi (Upwind Security) noted that “laggards” have robust policy committees on paper but zero technical visibility into where AI is actually running. Leaders, by contrast, can technically demonstrate policy enforcement in real-time. Chris Peterson (Unity) shared the “Yes, And” framework, where security becomes a partner in innovation by requiring three pillars for any AI project: a specific purpose, a bounded dataset, and an explicit output plan. Daniel Hereford (Intermex) emphasized that while compliance can mobilize resources, it is just the starting line for true product security maturity.
Notable Quotes:
- “The biggest red flag is when AI governance exists only at the policy level. If you cannot demonstrate how it shows up technically, the risk isn’t theoretical; it is material.” — Rinki Sethi (Upwind Security)
- “While compliance can mobilize security efforts, it is just the starting line. To create real impact, CISOs must move beyond checkbox exercises.” — Daniel Hereford (Intermex)
- “Fast is allowed, but blind is not. You have to give us visibility into what you are trying to do and what you plan to do with the outputs.” — Chris Peterson (Unity)
See how these CISO leaders are building technical guardrails for AI by watching the recording.
Session 6 | Securing New Attack Surfaces: From AI Workflows to AI Agents
As products evolve from simple chatbots to autonomous agents, the attack surface is shifting from code to “intent.” Devin Maguire (Cycode) sat down with Sarrah Bang (Alter Domus), Neil Bahadur, and Jimmy Xu (Cycode) to discuss the reality of securing agentic customer experiences.
Neil Bahadur explained that agents are non-deterministic, creating a “nightmare scenario” where an attacker doesn’t need a technical exploit—they just need to be “convincing” enough to trick a helpful agent into bypassing authorization boundaries. Sarrah Bang (Alter Domus) introduced the concept of “Least Agency,” arguing that we must strictly limit what an agent is authorized to do (like executing high-privilege tool calls) rather than just what it can see. Jimmy Xu (Cycode) highlighted that the AI supply chain—including malicious plugins and poisoned MCP servers—is the new low-hanging fruit for attackers looking to weaponize the SDLC.
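One way to picture “Least Agency” is a deny-by-default guard that sits between the agent and its tools. This is a minimal sketch under assumed names (`AgentPolicy`, `lookup_order`, `refund_payment` are all hypothetical, not from any real agent framework): the agent may be able to see many tools, but it is only authorized to execute an explicit allowlist.

```python
# Hypothetical "Least Agency" guard: authorize each tool call against an
# explicit allowlist before execution. A "convincing" prompt cannot expand
# what the agent is permitted to do. All names here are illustrative.

class UnauthorizedToolCall(Exception):
    pass

class AgentPolicy:
    def __init__(self, allowed_tools, max_amount=0):
        self.allowed_tools = set(allowed_tools)  # explicit, narrow allowlist
        self.max_amount = max_amount             # cap on high-impact actions

    def authorize(self, tool_name, args):
        # Deny by default: anything not on the allowlist is rejected.
        if tool_name not in self.allowed_tools:
            raise UnauthorizedToolCall(f"{tool_name} is not permitted")
        # Bound the blast radius of permitted tools, too.
        if args.get("amount", 0) > self.max_amount:
            raise UnauthorizedToolCall(f"{tool_name} exceeds amount cap")
        return True

# A support agent may look up orders and reply, but never issue refunds,
# no matter what the conversation "convinces" it to attempt.
support_agent = AgentPolicy(allowed_tools={"lookup_order", "send_reply"})

support_agent.authorize("lookup_order", {"order_id": "A123"})  # permitted
try:
    support_agent.authorize("refund_payment", {"amount": 500})
except UnauthorizedToolCall as e:
    print(f"blocked: {e}")
```

The key design choice is that the check limits what the agent is *authorized to do*, not merely what data it can access, which is the distinction Bang draws.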
Notable Quotes:
- “The agentic world brings all the joy of a social engineering attack surface inside your software. Abuse of a friendly, helpful agent is likely the most common exploit we will see in 2026.” — Neil Bahadur
- “A seatbelt tested at 40 miles per hour works at 40 miles per hour. But when AI moves development to 200 miles per hour, you have to rethink the entire safety mechanism to fill the gaps.” — Jimmy Xu (Cycode)
- “We should apply a new adaptation moving toward ‘least agency.’ Instead of just limiting what an agent can access, we must strictly limit what it is authorized to do.” — Sarrah Bang (Alter Domus)
Watch the session to explore the new model of Least Agency and how to secure autonomous AI workflows.
Session 7 | The Future of Product Security in the AI Era
Amir Kazemi (Cycode) and Guillaume Montard (Cycode) went under the hood of the next generation of security architecture. They argued that AI without context is simply noise that developers will eventually ignore. To be trusted, AI must be environment-aware.
This session introduced the Context Intelligence Graph, an AI-native substrate that turns raw visibility into autonomous understanding. By mapping every relationship across the SDLC, the graph allows Cycode to provide “Decision Traces”—explaining not just that a risk exists, but why it matters and how to fix it automatically. Guillaume Montard (Cycode) shared the 2026 roadmap, which includes the Multi-Agent Orchestrator designed to run the heavy lifting of security triage on its own.
Notable Quote:
- “Context doesn’t just come from data anymore; it comes from connected data. This is the difference between knowing something is vulnerable and understanding why it matters right now.” — Amir Kazemi (Cycode)
Session 8 | Securing APIs from (Cy)Code to Cloud(flare)
In our final session, Ronen Slavin (Cycode) and Bashyam Anant (Cloudflare) discussed the “Post-PR” security gap for APIs. Bashyam Anant (Cloudflare) revealed that state-sponsored actors are using Gen-AI to automate up to 90% of attack steps, specifically targeting “Shadow APIs” that security teams don’t even know exist.
To combat this, Cloudflare and Cycode demonstrated a “Positive Security” model. By closing the loop between development and the edge, the platform can automatically block any API request that does not conform to the developer’s intended schema. This turns the API from an open door into a hardened, validated gateway that automatically protects the application the moment code is deployed.
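A “positive security” check can be sketched as schema conformance rather than a blocklist of known-bad payloads. This is a simplified illustration, not the Cloudflare/Cycode implementation; the schema and field names (`order_id`, `quantity`, `is_admin`) are hypothetical.

```python
# Minimal "positive security" sketch: an API request is allowed only if it
# conforms exactly to the developer's intended schema. Unknown fields are
# rejected outright (deny by default), so a "Shadow" parameter an attacker
# probes for never reaches the application. Schema below is illustrative.

INTENDED_SCHEMA = {
    "order_id": str,
    "quantity": int,
}

def conforms(request_body: dict) -> bool:
    # Reject any field the schema does not declare.
    if set(request_body) - set(INTENDED_SCHEMA):
        return False
    # Every declared field must be present with the declared type.
    return all(
        field in request_body and type(request_body[field]) is expected
        for field, expected in INTENDED_SCHEMA.items()
    )

print(conforms({"order_id": "A1", "quantity": 2}))                    # valid request
print(conforms({"order_id": "A1", "quantity": 2, "is_admin": True}))  # injected field rejected
```

In practice the schema would be derived from the code or API definition at build time and enforced at the edge, which is the “closing the loop” the session describes.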
Notable Quote:
- “Expect exploits to be weaponized at an industrial scale the moment a CVE drops. Runtime defense buys you time, but it cannot be your only line of defense.” — Bashyam Anant (Cloudflare)
Learn how to close the gap between code and the edge by watching the final session recording.
Wrapping Up: Your Roadmap for the AI Era
The 2026 Product Security Summit made one thing clear: staying secure in the AI era is not just about buying better tools. It is about gaining complete visibility and fostering seamless convergence across your entire security posture.
By unifying AST, SSCS, and ASPM, organizations can finally move past the noise and focus on the risks that actually matter. This represents the most significant shift in Product Security for the AI Era.
Next Steps to Secure Your SDLC:
- Watch on Demand: Every session from the summit is available to watch in the sections above.
- Get Context: Explore the Context Intelligence Graph, our AI-native substrate designed to turn visibility into autonomous understanding.
- Download the Report: For a deeper look at the data, download the 2026 State of Product Security Report.