AI & Code Security: Accelerated Development or Amplified Risks?

Webinar

AI is accelerating code production while simultaneously creating a ‘perfect storm’ for security.

Is AI-generated code introducing vulnerabilities at an exponential rate? Can your AppSec team keep up with detection and remediation? And are hackers leveraging AI to exploit vulnerabilities more quickly?

The truth is, AI is both a blessing and something we need to stay alert to. It's speeding up code development and delivering incredible productivity gains, but it's also creating unprecedented security risks.

In this webinar, we tackle these key questions head-on and discuss the impact of AI on code security.

Plus, don’t miss our speakers sharing the ways you can harness AI as a force for good in your application security posture management program.

Our expert panelists discussed:

  • The rapid rise of AI-generated code and the critical security implications
  • How to defend against the AI-powered attacker
  • Harnessing AI in application security posture management programs

Presented by:

Jimmy Xu
Field CTO, Cycode
Amir Kazemi
Director of Product Marketing, Cycode
Derek Smith
Practice Director of Azure, Trace3

Have questions or want a custom demo?

Get a personalized demo and learn how you can develop secure software, faster with Cycode.

By submitting this form I agree to be contacted by Cycode, and receive occasional offers & product updates via phone or email in line with Cycode's Privacy Policy.
Transcription

Amir Kazemi:

Hey, everybody, super excited to welcome you to this executive panel session from Cycode on AI and code security. My name's Amir Kazemi. I'm the director of product marketing here at Cycode, and I'm excited to be your host for today's panel discussion. For those of you who are new to Cycode and our webinar series, welcome. We host a monthly series where we tackle the biggest questions that are top of mind for AppSec. And for those of you joining us again, we're glad to have you back.

You're probably all hearing a lot of noise about AI in general, and in security too, but the truth is that AI is dramatically changing AppSec, and we can't really ignore it, right? So, the goal of today's discussion is to tackle some of the biggest questions and challenges around how AI is changing AppSec, and to leave you with a clearer set of recommendations for building your program and mitigating AI-related risks within software development. To help me get started, I'm delighted to welcome two seasoned cybersecurity leaders and practitioners in the space, Jimmy Xu and Derek Smith. So, let's go ahead and get started with some intros. Jimmy, Derek, I'll leave it to you two.

Derek Smith:

Yeah, Amir, my pleasure. Thank you for having me. Greetings, everyone. My name is Derek Smith. I'm a cloud practice director for Trace3. I've been working in cloud and the cloud security space for the last decade, helping clients transform their cloud operations and figure out how best to secure and digitally transform their applications as they look to take advantage of everything the cloud has to offer. And I've been fortunate enough to work alongside the gentleman I'll be chatting with today. I know Jimmy is phenomenal, so I'll let him introduce himself.

Amir Kazemi:

Thanks, Derek.

Jimmy Xu:

Thanks, Derek. Jimmy here. I'm the field CTO for Cycode; I joined about a month and a half ago. I've been in the industry for 20-plus years and have done pretty much everything from development work, to cybersecurity, to ops and DevOps, in both the public and private sectors, with lots of years in consulting and solution work, so I've been a practitioner for a long time. AI is dear to my heart. Like Derek said, in my previous life I got to work alongside him, really good times, helping our clients secure their cloud applications. I'm very excited to be here and have this conversation with the amazing team here.

Amir Kazemi:

Super glad to have you on board, Jimmy. Thank you. To kick us off, I wanted to start around the topic of AI and software development generally. As we all know, AI is rapidly transforming software development. So I'd love to ask: how do you see this evolution impacting traditional software development and its life cycle? Any thoughts there?

Derek Smith:

Yeah, I think AI is impacting software development from two aspects. One, we're seeing overall code generation triple, quadruple. And as more developers look to draft code for various features and pieces of an application, or an application as a whole, we're seeing them rely a little bit more on AI development-assistant tools, things like GitHub Copilot, GitLab Duo, and others out there in the market. So it's really reshaping developers' work processes in terms of how they code, how effectively and efficiently they can produce code, and how they address various things within the software development life cycle.

The flip side of that is we're also seeing it leveraged from a training aspect. A lot of developers are now using these tools to extend their knowledge of a programming language they already have a good amount of skill in, or to learn a completely new programming language and expand the set of languages they can code in, right? So we're seeing developers who may be very proficient in Python, or Java, or C# choose a different language and leverage these code-assistant tools to expand and enhance their knowledge of other programming languages out there.

Amir Kazemi:

Yep, I love that.

Jimmy Xu:

Yeah, well said, Derek. Certainly, I'll say I've seen nothing like it before, the way it's transforming things. Additionally, at the org level, I work with many customers where, if you think about the stakeholders thinking about their overall cloud strategy, their development strategy, their security strategy, many of them just think, "Hey, let's do something with AI," right? But if the company wasn't a software company before, it sometimes takes the org a little while to realize, "Actually, maybe we weren't a software company before, but now we are, because of the low bar of entry to doing some AI development." So that's another aspect of the transformation: people weren't ready to embrace the idea that now every company should be treated as a software company.

Amir Kazemi:

Yep, yep.

Jimmy Xu:

One more thing I'll say is supply chain. We're assembling apps now, moving to the cloud, pulling in all these artifacts, and in the AI development world everybody's talking about Hugging Face, right? Most folks are probably not used to building their own frameworks and models organically, so they'll be using other people's. So it's just going to heighten the supply chain risk for AI development as well.
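
As one small supply-chain hygiene sketch along those lines, here is a minimal Python example, assuming the huggingface_hub library, with a placeholder repo id and commit hash, of pinning a third-party model to a vetted revision instead of implicitly trusting whatever is latest:

```python
# Minimal sketch: pin a third-party model to a known revision rather than
# pulling whatever "main" points to today. Assumes the huggingface_hub
# package is installed; the repo id and commit SHA are placeholders.
from huggingface_hub import snapshot_download

MODEL_REPO = "some-org/some-model"  # hypothetical repo id
PINNED_REVISION = "0123abcd"        # a vetted commit SHA, recorded like a lockfile entry

local_path = snapshot_download(
    repo_id=MODEL_REPO,
    revision=PINNED_REVISION,  # download fails if this exact revision is gone
)
print(f"Model materialized at {local_path}")
```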

Amir Kazemi:

Yep. It's a great point, Jimmy, especially around organizations that aren't traditionally software-first companies. For more than a decade we've been talking about this whole notion of digital transformation. Some companies are still in the midst of their digital transformation journey, and some are already software-first and ahead of the game in adopting AI tools and processes. So, really good point. I'd love to hear more of your perspective on the challenges and opportunities this presents for organizations. When we adopt AI, or see AI being embedded into the organization, what are the challenges? What are the opportunities for these companies?

Jimmy Xu:

I can start this one. Challenges, well, you kind of mentioned one already: the sheer amount of new code, whether human-generated or AI-generated, produced by developers who could have been developers for their entire career or developers for a day, right? What we call citizen developers. So there's that aspect: the volume of code and the awareness of the coder.

Also, I already talked about supply chain, but another challenge is awareness of how to deal with it, right? You have AI development, but what are you building? Are you building language models? That comes with its own attack surface. And even if you're just doing general AI-assisted development, there are other aspects: call it the attack surface, but also the mitigation. I don't think the industry has caught up. There are a lot of good pieces written out there, but as far as awareness goes, that's the obvious challenge.

Opportunities? I've always been an optimist. We're early, and all the copilots are still maturing, but I think eventually there's going to be a point where these AI-assisted tools are good enough that they don't produce more risk but actually help humans write better code. The day self-driving vehicles are truly good, a day like that will come for software development too.

Amir Kazemi:

Yeah, absolutely. That's a really good point about the challenges too. There's been an explosion of code, right? Jimmy, as you know, there's a study we saw recently that found 93 billion lines of code were generated just over the past year, and that's growing exponentially over time, which is incredible to even think about. So yeah, awesome to hear your perspective. Derek, did you have any thoughts on challenges and opportunities?

Derek Smith:

Yeah. I think there are a lot of challenges, but there's a lot of opportunity. Obviously, AI is going to fundamentally change the way businesses operate, how they go to market, and how they perform their basic operations. It's a vast unknown. Even though we've probably been doing some form of AI for the last two-plus decades, if not a little longer, in very rudimentary forms, it's been maturing at a fast and growing pace. And if you look at that curve from an AI perspective, we're entering the point where it's really starting to inflect and rise at a rate we haven't seen, akin to an industrial revolution. I think that's how we have to think about it: there's going to be a lot of upheaval and change, and businesses are trying to grapple with what AI means to them, how they integrate and take advantage of its capabilities where they stand today, and how they plan for the direction AI takes in the future.

And to your point, we're seeing this explosion of not only code creation but content creation in general. So there are two sides of the coin. We want to leverage this collective knowledge, this collective shared material that's been created over the years for everyone to use. But in the same breath, there's also material that is very private and protected, and we have to make sure we're not taking that code, or content, or whatever it is and reusing it without people's permission.

So there's this balance we're trying to figure out, and that's the challenge most organizations face right now: understanding where that equilibrium will sit, and how they then move forward with not only their own data and code, but also whatever industry or generally available data and code they can pull in to accelerate and enhance their developer capabilities. That applies to professionals, but also, to Jimmy's point, to the citizen developer: the folks who don't have formal training or formal knowledge, but who have a great understanding of the business, or operations, or some other facet that maybe a developer doesn't, and who can provide enormous insight into where AI could have the biggest impact, right? That's our overall workflow and process: being able to peel it back, see where the friction points are, and drop in tools that help make people more efficient and more productive from a business standpoint.

Amir Kazemi:

Yep, yep. I love the idea of the citizen developer, and I'd love to talk more about that. But before we get into it: low-code, no-code development is gaining a lot of traction with AI-powered dev teams. How do you think teams are evolving the idea of low-code, no-code?

Derek Smith:

I think teams are evolving in the sense that we now know developers can exist anywhere; it's not something we necessarily need to source formally. There are now tools and capabilities that enable, to my earlier point, anybody in the business to take a look at a problem or a situation, understand where the friction or the barrier is, and potentially address that challenge, pushing the organization into a more efficient, more effective mode of operating and solving a real challenge that benefits the business.

And I think that's where low-code, no-code development is going to have its biggest impact in the world we exist in today: providing a platform for those folks to solve that challenge without any formal training, right? Being able to tell these platforms, "I see this work process is getting stuck here. Help me develop a solution, and here are my ideas," in natural language, and having the platform understand that natural language and build out the code from that context.

And then of course, we'll probably have a professional dev team or two review that to make sure it's effective, it's coherent, and we don't have any bugs. But I would say anywhere from 60 to 80% of that code is going to be developed in that platform to address that challenge, and it's going to be done by somebody who has no formal development training. That's a really powerful button for an organization to push. And to think that you could potentially have hundreds to thousands of those in your organization, that's both really exciting and probably very terrifying in the same breath. So yeah, it's going to be an interesting journey as more organizations start diving into these platforms and figuring out how to use them effectively within their business.

Amir Kazemi:

Yeah, absolutely. And I want to tie that back to the citizen developer topic that you discussed. First of all, what is a citizen developer? I think the audience would love to learn a little bit about that. And how is AI, I guess, enhancing the citizen developer?

Derek Smith:

The first time I ever heard the term citizen developer, and this will showcase how much I go to Microsoft conferences, was when Satya Nadella mentioned it at a Microsoft Inspire conference back in 2018, specifically around the Power Apps tools. That's the direction those tools were heading: how do they help the everyday person, somebody who isn't a computer science major and hasn't gone through hours upon hours, weeks upon weeks, or months of Java training, or Python training, or pick a programming language?

The example they used was a gentleman, I think from Safelite, who helped manage how the various glass windshields got shipped around to all the different locations. He noticed a very inefficient process for how they tracked, inventoried, and managed that whole thing. So he went into Power Apps, with zero formal development training whatsoever, and built an application that I think 10X'd, if not more, their ability to track, manage, and maintain that whole process.

And he technically low-coded himself, I don't want to say out of a job, but he really took 80% of his daily work and just wiped it out with an application, right? But he then became even more valuable to the business, because now they had this internal application that netted real ROI. And that's the power of the term citizen developer. He was somebody who knew Excel. That's it. He didn't have Java or any other formal programming language training, but he delivered an impact to the business that they're probably still leveraging, if not enhancing, at this point.

Amir Kazemi:

Yeah. I mean, I guess with that definition, I could almost consider myself a citizen developer, is that right, Derek, at this point?

Derek Smith:

Yeah. I mean, at this point, we've all become citizen developers to a certain extent, right? They were more of a unicorn, harder to find, several years ago; that was 2018. Nowadays, anybody can purchase GitHub Copilot, or GitLab Duo, or pick your AI-assisted development tool, or jump into a Microsoft platform like Power Apps and leverage Copilot, or go on ChatGPT on the web, right? There are just endless amounts of tools. So really, anybody who has access to the internet can be a citizen developer: you can go out there and find a site that'll generate some form of code for you to leverage.

I loved that last year, at a conference Trace3 hosts, our CTO, Tony Olzak, showcased how these tools could rebuild a classic game. He did it in a matter of, I think, 20 minutes; they basically asked the tools to code a 1980s Atari game into an iPhone app you could play, right? That's where these tools are at. We've reached the point where something that probably took a team of developers to build back then now takes somebody with zero formal training 20 minutes.

Amir Kazemi:

Yeah, yeah. The amount of innovation that's happening, and how fast it's happening, is incredible. On that note, Jimmy and Derek, we're in 2024. Where is this going to head by 2030? Where are we going with this? What's going to happen?

Jimmy Xu:

So, as you and Derek talked about, the bar to entry is low; a seasoned developer can refactor a game in 20 minutes, right? In the future, everybody will be a developer. You guys talked about that. But I have two more points, and I say them with a sense of optimism. The tech is here, but the maturity is probably not here yet. Today it's more AI-assisted, human-led development. Given the amount of improvement and progress in such a short time, hopefully by 2030 the tech will be mature enough that it could be AI-led: you just create an idea, and the AI spits out the code for you, hopefully safe in many forms and fashions. So that's one aspect. I hope that by 2030 it's more about ideation than actually coding.

The second thing is velocity. We talked about speed, so I want to talk about velocity. With low-code, no-code, citizen developers, and AI-assisted or AI-led development, I think there will fundamentally be a different expectation around velocity, right? What really matters is taking an idea and putting it into production. If getting there faster becomes the norm, it reminds me of the dial-up days in the '90s versus internet speeds now; that's going to be the future, right? Any kind of interruption will stand out, and security assurance could be one of them. If today the velocity of software releases creates friction when you add security, imagine the future. We all have to prepare ourselves to either change our expectations or change the way we do business, so security doesn't hold back a world where you have an idea and, in the next hour, it could be in production.

Amir Kazemi:

Yep. Derek, any thoughts on your side on where we’re headed in 2030?

Derek Smith:

Yeah. Where we're headed is unknown territory. Quite honestly, we can sit here and paint a pie-in-the-sky picture of what we think 2030 will be. I think everybody remembers The Jetsons, and maybe I'm dating myself from an age perspective, but The Jetsons was a cartoon set in… what was it, the late 2000s? 2020 at the latest, I think. And there were flying cars. We're not even close to that, but in the '80s, we thought we would be, right? If you'd asked somebody in the 1980s whether flying cars would be possible in 30 years, most people would have told you, "Yeah, we should be there." And here we are, past that, and we're not.

So I think that's the tough thing to really understand: there are a lot of capabilities that AI is going to help unlock and enable, and how quickly we get there is the unknown. It's going to take a Herculean effort to push it. And the difference, I think, between this, we'll call it an industrial revolution, and the ones we've seen in previous ages is that the amount of technology and knowledge we have access to is far greater than what we had in the last one.

So I think about the potential by 2030. Yeah, I've seen people comment that the new job will be the AI kill-switch engineer, right? They just push a button to kill the AI. But I think we'll be at a happier medium, where we've hopefully understood and found a way to make AI an integral part of our work process, where it truly is collaborating with us and helping us be more efficient and more effective. I've heard this term, the 10X developer: this notion or promise that a developer can become so skilled with AI that they'll be as efficient or as effective as 10 developers, right?

And I think that's the hope or the dream for 2030: we'll have a world, or at least a healthy number, of these 10X developers, where we're so efficient, so comprehensive in how we code and develop applications, that one developer with some sort of AI assistant working alongside them will be as effective as 10. And yeah, it sounds scary, but at the same time it's going to unlock a whole new world of possibilities, different jobs and different things that we probably can't picture today. I know the unknown is scary, but that's where I think we'll be in 2030: a world of 10X developers.

Amir Kazemi:

Yeah, yep. I love that perspective. AI is undoubtedly changing the way developers work and making them more productive; to your point, the 10X developers. But now I want to flip the script a little and talk about emerging security risks from AI. Do you think security teams are going to approach these new vulnerabilities and risks in a different way? Are they going to change how they try to secure their, so to speak, threat landscape? How do you think about that?

Derek Smith:

I think the threat landscape is going to expand, right? And not to be mean to other security vendors out there, but we've started to see this happen more often as AI has been introduced into the world. The speed at which we've expected developers to push and develop code, especially quality code, has really accelerated, and that's led to some unfortunate things where quality probably wasn't what it should have been. We've learned some hard lessons. Take the SolarWinds incident several years ago and the whole software bill of materials conversation: are we truly checking everything that goes into our code? Well, it turned out, quite honestly, that as an industry we hadn't been at all. So there was a big push to really get that bill of materials in place, to truly understand all the different components going into an application. And over the last two or three years, we've seen major companies, Microsoft included, push out code that was not ready for production, because there's this constant push from business leaders to do it faster, to beat others to market. And that's led to some challenges.

So yeah, we as an industry are going to have to figure out a way to balance those scales, especially from a security perspective. Because, thankfully, the CrowdStrike incident wasn't a breach, right? Nobody got access to those systems. But what if it had been? That incident affected everyone globally. Travel shut down, hospitals were in jeopardy, manufacturing lines, other things. Can you imagine if a threat actor had actually been behind that? They essentially could have held the world hostage, and that's a scary reality to contemplate.

So we, as IT professionals, working in conjunction with folks across various verticals, have to figure out a way, I'm not saying to slow things down, but to build an effective process that, for lack of a better term, provides enough checks and balances that we can limit or address those types of situations. Because quite honestly, I don't think there's a world where these incidents just go away. But we've got to find a way to respond to them, and respond in a way that minimizes, or at least limits, the impact given how quickly code production is moving. I think that's what some of these past incidents have really highlighted as a major challenge.

Jimmy Xu:

Yeah. And just remember, people think about it differently, but, like Derek said, AI is here, right? If you're a security team, you can't just push back and say no AI. It's going to be here; it's already here. So you've got to change your mindset about how you deal with it. But I'd also say it's not just the security team's responsibility; it's everybody's responsibility. I feel like I talked about that a lot in the early days of DevSecOps. And when you think about AI risk, like Derek said with the hospital example, it's not just cyber risk, right? We're talking about public safety, even fairness and bias. The whole scope of security has expanded with AI.

So it's really everybody's responsibility, and we've got to approach it differently. What hasn't worked in the past will not work in the future. If there's one thing I learned from doing DevSecOps for many years, it's that, for the same reason the cloud exposed some legacy approaches, how you do vulnerability management on-prem, for example, will not work in an AI-centric world. So I'd say you've got to be open-minded and try new things to make sure you keep up, because anything you do risks slowing things down. So how do you be a force multiplier instead? That's something to focus on.

Amir Kazemi:

Yeah, yeah, absolutely. One other question I had for you both: AI is obviously contributing to this explosion of code and making developers a lot more productive. But on the flip side, attackers are also going to be using AI against us, right? The explosion of code creates a broader threat landscape, but do we also have to think about specific vulnerabilities, with attackers or adversaries using AI against us? How do we think about that? Does it change anything for us in the landscape?

Jimmy Xu:

I’ll take-

Derek Smith:

I think the obvious answer is, yeah, we do have to think differently, right, Jimmy? Now there's a whole 'nother area of threat landscape. I know OWASP has its own AI vulnerability and threat areas that it has highlighted and targeted, and, for lack of a better term, we've threat-tested Copilot. And there are vulnerabilities we found there too, right? You can massage these systems and convince them. And to Jimmy's point, yes, AI is here, but AI is still, and I'm borrowing another term from our CTO, an omnipotent toddler, right?

Jimmy Xu:

Mm-hmm.

Derek Smith:

Think if your toddler had infinite knowledge. I know I'd be terrified if my four-year-old or my seven-year-old had that level of knowledge; I wouldn't know what to do. And that's kind of where we're at: we're trying to figure out how to use this effectively. In the same breath, on the other side, you've got a bunch of folks looking to do bad for whatever reason. They've got access to the same tools, and they're going to leverage them. And the unfortunate thing is they've probably got a little more time than us to focus and go after it.

So we really have to understand, as an industry and as organizations, that, for lack of a better term, you fight fire with fire, right? That comes from the proverb: hey, if you want to stop a fire in its tracks, what stops it? Something that's already burned, because it can't spread there. And that's been a maxim, a truth, we've seen in a lot of different things. So we're going to have to leverage AI to help combat a lot of these AI threats, but we need to do so with the understanding and the mindset that none of these tools are perfect. Humans built them. There are going to be vulnerabilities; there are going to be holes.

And I think, ultimately, it circles back to what I said earlier: it's about how we respond, how we contain and deal with the threats when they come. Because, again, they're going to come. It's on us as the defenders to have an effective and efficient process to respond to those threats when they arrive, to help minimize and mitigate the risk, because there's no force field we can erect that's going to stop everything. What we can do is have an effective process for identifying and dealing with them in a quick and efficient manner, so that hopefully little to no harm is done, and keep a constant, continuous-learning mindset, right? Always learning, always improving, understanding there are always things we can do better. I think that's the approach we have to take, especially with the broader landscape now, where AI is both a helpful tool and something we have to defend against.

Jimmy Xu:

Yeah, the omnipotent toddler is one of my favorite quotes from Tony. On the sense of optimism: yeah, it's scary, right? We don't know; it's new; we're learning. Being open-minded is one thing. But I'm actually happy to see that the OWASP LLM Top 10 project has been updated three times in the last 12 months. People are working on these things; I'm really happy to see it. And there's a whole MLSecOps community. I'm glad people are forming these groups and contributing to, what, the new unknown? I hope to continue to see that. We have to, right?

Amir Kazemi:

Yeah.

Jimmy Xu:

But you talked about attacker AI, right? The other side of that is that it's going to multiply and quadruple the volume of attacks. At the last conference I actually did a talk on vulnerability management, and there's a university study around ChatGPT and some other tools, a pen-testing benchmark of some kind, where GPT-4 definitely outperformed everything else at exploiting vulnerabilities. So with all that, innovation always has its place, even for the good folks, right? That's why you see a lot of vendors with the me-too syndrome now for AI features, but the idea is that you've got to be innovative. If attackers are amplifying their attacks, then AI can be built for good too. So I think there's a lot of opportunity for good innovation in AI to fight that scale, and I'm very excited about what Cycode is doing. Let's hope. Yeah.

Amir Kazemi:

No, that's awesome. To go one level deeper, through the lens of a practitioner or a defender: how would you go about building a program to mitigate AI-related risks across the SDLC, or in software development in general? What's the approach there? What's the high-level thinking?

Jimmy Xu:

Derek, you want to go first?

Derek Smith:

Yeah. In terms of how you best manage an SDLC and what you need to look at: you can't stop the process, but in the same breath, you need to be checking progress along the way. And I think it goes back to a people-and-process thing, right? Tools are great, technology is great, but they're only as good as the people and the processes leveraging them. For lack of a better term, this highlights our love of DevOps: people and process come first.

And you need a culture and a process within the organization that understands security is a foundation, a line that stretches from one end of the SDLC to the other, of continuous checking: "Okay, are the code libraries this application is leveraging free of vulnerabilities? Is there something we need to address here? Did somebody accidentally produce code that surfaces some sort of SQL injection, or cross-site scripting attack, or something of that nature?"

So it's integrating security along the entire software delivery life cycle in a way that hopefully doesn't put a gate at every single point, but is there riding along as, for lack of a better term, a copilot, right? I'm on my motorcycle of an SDLC, and I have my little sidecar that is my security person looking over and saying, "Okay, yep, we're good, we're good. You're following the speed limit. You're obeying the guardrails, the rules. You're keeping us between the lines." And that's really how we have to think about this process: building something where security is integrated into the overall delivery pipeline, not a series of gates where we check a box and say, "Okay, we cleared gate one, let's keep moving." No, it's a continuous flow that we're always involved in.
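
To make the injection example Derek mentions concrete, here is a minimal Python sketch using the standard-library sqlite3 module (the table and queries are illustrative) of the flaw a scanner riding along the pipeline would flag, next to the parameterized fix:

```python
import sqlite3

# Illustrative in-memory database with one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated into the SQL string, so a
    # value like "' OR '1'='1" rewrites the query's meaning.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Fixed: a parameterized query treats the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row -- the injection
print(find_user_safe("' OR '1'='1"))    # returns [] -- input treated as a literal
```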

Amir Kazemi:

I love that, yeah: having that collaboration, and also keeping velocity high by not introducing gate after gate. So, thank you for that.

Jimmy Xu:

Well said, Derek. I mean, there's old risk and new risk, right? The old risk is what I talked about: AI amplifies existing risk while also introducing new risk. But I'll offer a good mental three-step reference, and this reminds me of the API security days as well. AI is already here, so we should operate on the assumption, probably fact at this point, that most orgs are doing some kind of AI that people may not even know about. Step one of the three-step process: understand what you're building, right? What kind of AI services are you building? Are you building language models, or something else? You've got to have visibility, an inventory of what you're doing and what you have.

Then, based on that, you develop a governance model, right? To Derek's point, you integrate it into your software development lifecycle. But what is it? Are you building data lakes? Are you building machine learning? If you are, then you need something tailored more toward MLOps, right? This is where it becomes important to understand what you're doing so you can tailor the defense to it.

And the third step: now that you have visibility, governance, and some kind of guardrails, you can look at how to operationalize and adopt, over time, because remember, we want things to be continuous, with the least possible impact on velocity. Adoption means you may have different dev teams working on different kinds of AI projects. Once you have that common ground, you can tailor things specifically: "Let me tune this up a little, right?" You've got to have the right people and process in play so that what you put in place, I don't like to use the words hard gate, doesn't introduce friction. But it's a step-by-step process, and I'd say it starts with understanding what you have.
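
As a rough sketch of Jimmy's step one, with a hint of step two, here is a minimal Python example of the kind of AI-asset inventory he describes; every project name, field, and rule below is hypothetical:

```python
# Sketch of "step one": inventory what AI you're actually building before
# writing governance rules. Names, fields, and checks are hypothetical.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    kind: str          # e.g. "llm-app", "ml-pipeline", "low-code-workflow"
    model_source: str  # e.g. "internal", "huggingface", "vendor-api"
    owner: str

inventory = [
    AIAsset("support-chatbot", "llm-app", "vendor-api", "team-cx"),
    AIAsset("churn-model", "ml-pipeline", "internal", "team-data"),
    AIAsset("invoice-router", "low-code-workflow", "vendor-api", "team-finance"),
]

# Step two starts from this view: e.g., route ml-pipelines toward
# MLOps-specific controls, and flag third-party model sources for review.
for asset in inventory:
    needs_mlops_review = asset.kind == "ml-pipeline"
    third_party_model = asset.model_source != "internal"
    print(f"{asset.name}: owner={asset.owner}, "
          f"mlops_review={needs_mlops_review}, third_party={third_party_model}")
```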

Amir Kazemi:

Absolutely. I want to talk about ASPM and how to harness the power of AI in your ASPM program. Obviously, with the explosion of code and the expanded threat landscape, there could be tons of new blind spots. So how does ASPM relate to what's going on in the world of AI, and how can you leverage the power of AI in your ASPM? I'd love to get your thoughts on that.

Jimmy Xu:

Yeah. I mean, ASPM, aside from the ongoing confusion in the market, is a perfect fit for AI because, like I said before, from a risk perspective AI amplifies existing risk and introduces new risk. The broader scope of AI security is very wide, but if you just apply AI to software development, what we learned even before AI, with modern applications moving to the cloud, is that you have to look at the SDLC in its entirety. You cannot just look at the code developers wrote. You've got to look at the libraries, the supply chain, the actual CI/CD system, right? Look at the code repos, the secrets, the infrastructure configuration that can render one of those code vulnerabilities reachable, attackable, right?

So I think ASPM fundamentally creates the opportunity to look through a broader lens, to look at all of that together. How that ties into AI is that now you have a tool to look at risks holistically, so you can see what you actually need to mitigate and give the different stakeholders the ability to say, "We can adjust what we focus on and go after the highest risk given the time we have." Remember, AI also increases the velocity of releases and raises expectations. We talked about it: what if there's a time when you need to push something to production because this AI will change the world, or change this company, and you only have a limited amount of time? What should you focus on? ASPM creates that opportunity.
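
As a toy sketch of that holistic prioritization, hypothetical findings and weights, not Cycode's actual scoring, the idea is to combine a finding's severity with reachability context from elsewhere in the SDLC so the riskiest issues surface first:

```python
# Toy sketch: a code vulnerability matters more when context from elsewhere
# in the SDLC (exposed infrastructure, nearby secrets) makes it reachable.
# All findings and weights here are made up for illustration.
findings = [
    {"id": "sql-injection-payments", "severity": 9.0,
     "internet_exposed": True,  "secret_nearby": True},
    {"id": "outdated-lib-internal-tool", "severity": 7.5,
     "internet_exposed": False, "secret_nearby": False},
    {"id": "xss-marketing-site", "severity": 6.0,
     "internet_exposed": True,  "secret_nearby": False},
]

def risk_score(finding):
    score = finding["severity"]
    if finding["internet_exposed"]:
        score *= 1.5   # reachable from the outside -> weigh it up
    if finding["secret_nearby"]:
        score *= 1.3   # a leaked credential widens the blast radius
    return score

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['id']}: {risk_score(f):.1f}")
```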

And the last thing I'll say is, when I think about this, you've got to think about multiple audiences, right? The two primary audiences are security and developers. A good ASPM program and technology should enhance productivity for both. For developers, AI in ASPM can mean AI-suggested fixes and remediations that make their lives easier, so they don't hate security. "Oh, it's easy. One click to create the code for my idea; one click, hopefully, to fix all my security issues."

For the security team, we already have a thousand-to-one problem when it comes to security experts who understand development, and it's going to get worse. We talked about adversaries using AI, right? Well, we need AI for the security team too, to increase their productivity so they can find multiple needles in huge haystacks very quickly. And just like citizen developers, there could be citizen security professionals who use AI-assisted technology to identify risks very early, fighting bad AI with good AI.

Amir Kazemi:

Yeah. Thanks for that, Jimmy.

Derek Smith:

Yeah. I mean, to your point, from a citizen security perspective, isn't that what we already try to do: teach our users to identify phishing, and smishing, and all the other fun little things? Because as we all know, the biggest vulnerability any organization has is the person in the mirror and your end users, and they can be your weakest link or your best defense, because they know your business inside and out. If they're empowered, to Jimmy's point, with good AI tools, especially good security AI tools, and can help the security team identify threats and remediate them before they become an issue, that opens up a huge world of more efficient SOC processes. Imagine the noise reduction teams could experience if your end users were that empowered. Heck, I'll take 40%, and that's probably low, a 40% noise reduction in the alerts you're getting. I think your SOC analysts would buy everybody a round at the bar and celebrate.

So yeah, again, people and process; those are the maxims of this world. The better we can arm our citizens, our citizen developers, our citizen security folks, and help them help us, the stronger position an organization is in to not only take advantage of the benefits but also keep the risks to a minimum. And that's a journey. It's not flip-a-switch-and-you're-there. It takes time. It takes building that process and getting buy-in from the organization. Those are tough hurdles to overcome, but it's something we have to keep pushing and striving for as a community.

Amir Kazemi:

Yep, absolutely. Arm your citizens.

Derek Smith:

Yep.

Amir Kazemi:

All right. I think that's all the time we've got for today. Jimmy and Derek, thank you so much for diving deep with us on this topic. For more information on Cycode's complete ASPM, or to learn a little more about Cycode AI, head over to cycode.com/ai, where you'll find a ton of resources on how AI is embedded into our ASPM. If you'd like to connect directly with Jimmy or Derek, feel free to reach out to them on LinkedIn; I don't know if either of you is on Twitter, but that's another option as well. Thanks, everyone, for joining us, and have a great rest of your day.

Jimmy Xu:

Thank you.

Derek Smith:

Thank you.

Amir Kazemi:

Thank you.