Code Confidence: Why Building a Complete ASPM Starts with Next-Gen SAST
Everyone in security complains about their SAST - slow scanning speeds, false positives, constant deployment issues. It's hard to remediate real risks quickly, and impossible to have code confidence. That's why there's never been a better time to cover every file in your repository through a single integration with a modern SAST as part of your Application Security Posture Management (ASPM).
Join our expert panel to hear James Berthoty, founder of Latio Tech and trusted voice on innovation in the hyper-complex world of AppSec, speak with Guillaume Montard, Cycode's Head of Product and former founder of Bearer, a next-gen SAST acquired by Cycode.
Our speakers will get to grips with the future of SAST. You'll hear the surprising truths behind 'best-in-class' point solutions that are really using the same open source scanners, plus an under-the-hood analysis of next-gen scanning engines that let you triage and remediate as part of an ASPM platform.
In this webinar you'll:
- Discover why scanning speeds matter - legacy SAST is slowing your developers and wasting resources
- Learn how to integrate SAST into a comprehensive ASPM strategy to remediate the risks that matter
- Find out how to scale your SAST and scan without manual intervention
Speaker 1:
Welcome everyone. I'm very pleased to welcome you to this panel session from Cycode's Code Confidence Panel. Today's topic is Why Building a Complete ASPM Starts with Next-Gen SAST. This session is part of our AppSec Secret series, our monthly webinar, so probably quite a few of you already joined us previously, and we're happy to see you again, and welcome to everyone joining for the first time. I'm your host today. I'm Guillaume Montard, Head of Product at Cycode. I was formerly the founder and CEO of Bearer, a modern SAST that was acquired by Cycode a few months ago. I'm excited today to be joined by a very special guest, James Berthoty. James is the founder of Latio Tech. He brings more than a decade of experience, I guess now, in engineering and different security roles.
He's been spending years reviewing different technologies, solving security and different business challenges from the real world. So I think we have an amazing host, yes, myself for sure, and an amazing panelist and guest today with James. Latio Tech, which James set up, is here to help connect people with the right products based on that experience he brings, for both security and engineering in general. So we have a lot of knowledge here today for this topic. And I'm going to kick-start right away because I think we have a lot to unpack today, James, so we can dive in. I guess the first step in that discussion is: where is SAST today in general? Something that probably a lot of us here know is that SAST is not new. It's not one of those new, trendy categories of product. It's been out there for probably 20 years now. So what is SAST, and why is it probably still one of the starting points of securing applications today, James?
Speaker 2:
Yeah, I think the evolution of SAST is just an incredibly interesting topic and idea. I think the core of it is definitely, when people think of application security, it's probably the first type of analysis or scanner that comes to mind because it's definitely the closest related to your code directly. But what makes it so interesting to me is I think that SAST went mid-market, or had this big demand explosion, around SCA scanning at the same time. And I think that makes it very interesting to think about what is SAST today and what is its role? Because it came about to try to scale the work of application security engineers, where instead of trying to manually review thousands of lines of code on a quarterly cycle or whatever, you needed automated testing and ways to do that. And so the demands of a scanner early on were thoroughness and a lot of historical languages, like more classical C++ and Java type stuff that enterprises are used to using. It was very enterprise focused; a scan could take days to a week because the idea is that it's automating this big review process.
And I think the rise of this DevSecOps SCA-plus-SAST thing really changed the question from how do we scan a giant monolithic repo to how do we get the fastest possible scan that can run in the pipeline? Because I think what's missed in this is that SCA scanning is very, very simple compared to SAST, but we expect them both to happen in the same amount of time. And so it really comes down to people want to scan every code change, but then it gets hard to get the context of the wider application. And so all of that to say that there's a real trade-off in SAST today, I think, between getting a quick and dirty in-pipeline scan of a microservice and people who want this big enterprise feeling of in-depth analysis of a giant monorepo. And I think that's what creates this weird tension in the market around historic players, where it's all about brand perception as well.
I know I’m touching on a lot of different topics here, but I think really from what I’ve seen just from testing different SAST tools, it’s really more brand perception than anything where it’s like the historic players have this brand perception of this is a very serious scan because it takes so long and it’s doing such thorough good analysis. And then there’s the faster scans that are seen as maybe less secure, but they’re running faster. And I think the truth is really somewhere in the middle and it has a lot more technical detail than what the brand story would tell you.
Speaker 1:
Yeah, it's interesting the way you put it. SCA gave a kick to SAST, because SAST is not new, like we said. And probably the way to use SAST at the origin, five or 10 years ago, is very different from the way we anticipate using SAST today, and SCA is probably a good reason for that, but you still have those two mindsets in the room, I guess. And so when we look at those solutions, I think there's a lot of debate. Usually when you talk about SAST, everybody has a different perception. It's not like everyone agrees: should we do it, shouldn't we do it, how should we do it, is it good, is it bad? It's a very heated argument usually when you come into a room, and a lot of people have different feedback. Do you think it's really based on what you mentioned, that enterprise way of seeing SAST versus the more modern SCA-era way of seeing SAST, or is there more to it?
Speaker 2:
The thing that I keep coming back to more and more, and you can tell me your opinion on this, is I think there are really two personas in security, and it's what makes it so hard. There's the person who takes it very seriously from a compliance, regulatory standpoint, who is in-depth, wants to be as sure as possible that there is zero vulnerability going into it. And I think that just lends itself to enterprise more because there's more process-driven stuff. It's not that it's inherent to enterprises, which is why I think that large tech companies especially tend to have a more rapid thought process that's more in line with this other persona that's more… My background was, I was a DevOps guy who got into security, and coming into that security mindset, I was just shocked by looking at all these scan results from all these tools and nothing happened. Quarter after quarter it was like pulling teeth to get a single thing done ever.
And that was my experience, which fuels a lot of my perspective, which is like, look, it’s going to be a lot of false positives. Let’s just get these scans done, let’s get the results to developers, let’s try to actually have movement on these is what I care about much more than this idea of we have the perfect scan. And I think that that’s really the persona struggle that’s happening more than anything inherent to the enterprise versus the mid-market. It’s just that DevOps-ey type people tend to be more at mid-market companies because they’re the ones who are doing more Kubernetes cloud native architectures. They need more of a jack of all trades. Maybe they have less enterprise AppSec experience where you would have someone from a more development background. And so I think that the personas slide into those different companies a little more commonly.
Speaker 1:
Yeah, I can absolutely relate to that. I guess it's also linked to the maturity of the organization when it comes to DevSecOps, and we talk a lot about shift left as well, however it's perceived, but it's all about what you do at the end of the day with those scans and whether you can actually remediate those issues, and it goes back to the original reasons why you're doing it. Is it more compliance? The negative way of seeing it is the checkbox exercise that I think nobody really wants to do, but that's probably something that tends to happen more in traditional organizations, whereas if you have the DevSecOps mindset, it's more about: give me less, but at least give me something that is good enough that I can act upon right now and actually fix. And, of course, the tool should probably behave and work differently in that case, and the results should surface in a different place.
And ultimately, I guess this is the difficulty in the market today: depending on where you stand between those two sides, the problem is we end up with a lot of solutions that were built under one paradigm and try to disguise themselves as another. And I think that leads back to different challenges that you probably also hear quite a lot regarding SAST. You mentioned false positives, but I think there are a few more, and you can touch on that point. Ultimately it's the right tool for the right job, but the question is what is the right job? And then maybe we can talk about the right tool.
Speaker 2:
Yeah, I think the differentiation between the compliance purpose versus the risk profile of a company doing SAST for pure security reasons does lead as well to different prioritizations around it. If your AppSec program is primarily being driven by compliance concerns, it's not that you don't care about security as well, it's just that the reason you're buying this set of tools is fundamentally that you have some requirement to have combinations of them, and you're just looking for a way to make progress in the most seamless way possible. That's one approach, as opposed to the "I run a healthcare company and if anything gets breached here, there are very serious legal consequences" approach, which is definitely another dimension in what creates the right criteria. But I think this is also, just from my experience with SAST, I was asked to evaluate SAST for the first time before I had done any development work really.
Every DevOps person is always doing light development work, but I wasn't doing any serious building of end-to-end Java services or anything. And when I was evaluating SAST, like most security people, I was evaluating entirely on its discovery capabilities, trying to trick the scanner, looking at all these weird edge cases I'm coming up with and why didn't it detect that, oh, here are the limitations. And I've really, over time, come to see that testing methodology as a fundamental flaw in what we do as an industry, both when we're assessing tools that we want to buy, and as a disservice when we give vendors feedback. Because when you actually have a tool, you just want real findings to get to the right people so that they can get fixed. But when you're assessing a tool, you often don't even fix a single finding; you are purely evaluating the detection capabilities of the tool.
And so vendors typically have to have this on-off switch that we're telling them to make, basically, where it's like, all right, during the demo, let's turn this thing all the way up so that it detects every possible false positive, true positive, whatever. And then in reality we'll turn the knob the other way, add reachability analysis, and get to, all right, here's what's actually a concern. And so I think we have to also change our evaluation criteria with this stuff to make it more about fixing things and seeing progress, and less about check out how I tricked this scanner by calling one service from another service and ha-ha, it didn't catch it.
Speaker 1:
Yeah, when you talk about false positives in that sense, it's: what do you want? Do you want to find everything possible, or do you want to be able to resolve some of those? I guess that's the trade-off and the balance that everybody's looking for. And to your point, the evaluation phase is usually a bit biased in that sense. And in the SAST world, unfortunately, there are only a few benchmarking projects, for instance, about the quality of the scanners, and it's quite well known that some vendors super-optimize their scanners for those projects. So in real life, what does that really tell you? It's more the operational aspect that you should be looking for than just the raw quality, or trying to trick the scanners, as you mentioned. It's interesting that you point that out. And today, do you think there are any big misconceptions when it comes to SAST that you see regularly when talking with customers?
Speaker 2:
Yeah, I think there's what we talked about earlier, which is the perception that an older enterprise player also has better scanning just because they're an older enterprise player, without any actual evaluation, when it's like, hey, our ability to pick apart code has gotten a lot better. And honestly, a lot of those older scanning methodologies have difficulties interpreting newer frameworks. I'm sure from your time with Bearer, you know building scanning engines around frameworks makes it a lot more doable. And that's why even when you see something like, hey, we support JavaScript now, it's like, all right, well you've just opened Pandora's box of which JavaScript frameworks do you support? And even within Django there are so many weird language-specific things. I just love this Django example, because there's an image tag that is vulnerable to cross-site scripting only within Django, because it sanitizes cross-site scripting everywhere else but not in the image tag.
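As a rough, framework-agnostic illustration of the kind of escaping nuance James is describing (not a claim about current Django behavior), here is a minimal sketch using Python's standard library: output escaping protects a quoted HTML attribute, but not an unquoted one.

```python
# Minimal sketch: auto-escaping helps in most template contexts, but not all.
# Illustrative only; the payload and markup are made up for this example.
from html import escape

payload = "x onerror=alert(1)"          # attacker-controlled value

# Quoted attribute: the escaped value stays inside the quotes, so it is inert.
safe_attr = f'<img src="{escape(payload)}">'

# Unquoted attribute: spaces are NOT escaped, so the payload breaks out of
# the src attribute and injects a brand-new onerror handler.
unsafe_attr = f"<img src={escape(payload)}>"

print(safe_attr)    # <img src="x onerror=alert(1)">  -> a single attribute
print(unsafe_attr)  # <img src=x onerror=alert(1)>    -> executable XSS
```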
And that's the kind of finding that Bob's SAST tool or enterprise… Most SAST tools aren't going to find that. And that's where it gets into the difficult nuances of there being real trade-offs in detection versus the framework-specific guidance that you're getting, which leans into some of the open source trade-offs as well. So big picture, misconception-wise, it's the idea that there are no trade-offs happening, or that you're just buying this perfect solution. And then I'll say with that, I still get frustrated at how many people I talk to are looking for these individual tool replacements and don't realize how the results can inform one another. And Polyfill is a great example: people who were just SCA scanners were having to manually build searches to look for open source packages that were using the Polyfill domain name, but SAST providers were actually the ones who could just run a simple code search for the text and get to the supply chain security issue faster than an SCA scanner.
And DAST actually was a great solution there as well because then you can just check the front end to see if it’s calling the CDN directly. And so that’s just where it’s like all of these different scanners, no matter the kind of issue when a vulnerability comes up, you need all the capabilities to be able to respond to something. It’s not enough to just say, oh, well I have SAST because it’s the most important, and so that’s what I’m investing in.
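To make the Polyfill point concrete, here is a minimal sketch of the kind of plain code search James describes: no manifest parsing, just look for the domain in code. The repo path, domain list, and file extensions are assumptions for illustration.

```python
# Minimal sketch of a plain code search for polyfill.io references in a repo.
import pathlib

SUSPECT_DOMAINS = ("polyfill.io", "cdn.polyfill.io")
CODE_SUFFIXES = {".js", ".ts", ".html", ".py", ".json"}

def find_references(repo_root: str) -> list[tuple[str, int, str]]:
    hits = []
    for path in pathlib.Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in CODE_SUFFIXES:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            if any(domain in line for domain in SUSPECT_DOMAINS):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file, lineno, line in find_references("."):
        print(f"{file}:{lineno}: {line}")
```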
Speaker 1:
I see. I fully agree with you. And that's the one plus one equals three, I guess, and the ultimate value proposition of going beyond point solutions and beyond silos. You can also think about it with an interesting example from a customer I was talking with recently: they have thousands and thousands of applications, they're looking for a very, very specific type of vulnerability, and asking, should we do SAST or should we do DAST? Well, realistically, you are probably going to have trouble plugging your DAST into thousands and thousands of repos, but you would probably have no false positives with that approach. So maybe it's a combination, it's like a-
Speaker 2:
But it's just different false positives, that's the problem.
Speaker 1:
And it's going to be different. Absolutely. But combining and getting the best out of those two, to do that mix of going deeper and broader, makes a lot of sense. It's the same as the Polyfill example you gave, actually. So that's really that thing about going beyond the silo, going beyond the point solution. I think this is one of the changes in the market that we're seeing today. And of course this is something that we can talk a lot about and yeah, go ahead. Go ahead.
Speaker 2:
No, it's just that I could go on; every modern example comes up with this issue. XZ Utils, you need a container scanner for that because it's an OS package tied to OpenSSH, but if you wanted to detect whether you were importing it somewhere, then you would need something analyzing the code to see if you're calling SSH stuff anywhere. And the way to detect it upstream would've been an SCA vendor doing upstream malware detection. And so you really need all of this to try to detect a real-world attack. It's not enough to just say we have some ultimate solution with only one scanner.
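As a rough sketch of the two sides James mentions for XZ Utils (CVE-2024-3094, which shipped in xz 5.6.0 and 5.6.1): check the OS package version inside the image, and search first-party code for SSH usage. The `xz --version` parsing, the SSH hint strings, and the repo path are assumptions for illustration.

```python
# Rough sketch: container-side version check plus code-side search for SSH use.
import pathlib
import subprocess

BACKDOORED_XZ_VERSIONS = {"5.6.0", "5.6.1"}
SSH_HINTS = ("import paramiko", "ssh://", "openssh")

def xz_version_is_backdoored() -> bool:
    try:
        out = subprocess.run(["xz", "--version"], capture_output=True, text=True)
    except FileNotFoundError:
        return False  # xz not installed in this image
    # First line typically looks like: "xz (XZ Utils) 5.6.1"
    first_line = out.stdout.splitlines()[0] if out.stdout else ""
    return any(version in first_line for version in BACKDOORED_XZ_VERSIONS)

def code_touches_ssh(repo_root: str) -> list[str]:
    matches = []
    for path in pathlib.Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore").lower()
        if any(hint in text for hint in SSH_HINTS):
            matches.append(str(path))
    return matches

if __name__ == "__main__":
    print("backdoored xz present:", xz_version_is_backdoored())
    print("files touching ssh:", code_touches_ssh("."))
```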
Speaker 1:
So does that also go to something I think you've mentioned often, that point solutions are not necessarily best in class, or actually not anymore? Can you share more about that?
Speaker 2:
Yeah, it's a huge one. I continue to run more and more evaluations now that I just went full-time with Latio two months ago; before that I was at PagerDuty. And now that I have more time to run more and more strict evaluations between tools, there's a lot that gets caught up in brand perception that really has very little to do with the reality when you run a test. Most of my testing has very similar results between tools, and a lot of times there are weird caveats, whether something's a point solution or part of a platform, maybe it was an acquisition that never really fully got integrated or maybe it did fully get integrated, or maybe they scrapped it and built a new scanning engine from scratch. There are a lot of things that get caught up in the details that lead to real usability issues for users. But overall, what I'd say is that's why I focus on usability even more than the specific findings.
As long as the specific findings aren't missing anything egregious, there are little bells and whistles that are like, oh cool, this detected somewhere where I'm uploading a file and said, hey, make sure you check that file for sanitization. That's a neat little detection thing, but I'm not basing my whole product buying decision on, oh, it caught a file upload issue where I'm not doing any file validation or something.
Speaker 1:
So for the audience, people listening who have been buying and experiencing SAST for many years, is there anything today that they should probably unlearn when making the decision to purchase a new, maybe more modern, SAST?
Speaker 2:
Yeah, for people who have been in AppSec longer than me, because I’m pretty new to AppSec, I always want to be careful with, there are a lot of hard lessons learned from older school application security as far as really getting good in-depth results and creating processes to manage it. But from my experience, the way that DevOps has changed both the scale of code deployments and the speed at which it happens, it simply is more important to just keep up and give developers the right information at the right time. That’s the only way to be proactive at this point. If your program is based on we run a cron-job type scan and then get those results to developers after the fact, that is just super time-consuming and difficult.
And I've been blessed to only work with DevOps-native solutions, so I always forget that there's this whole other category of tooling that exists that's the nightmare zone, as I think of it: manually uploading code, pointing it at SSH repos because there's no GitHub OAuth to get the app easily installed, manually running Lambdas to import all of your repos by hand or via the API if they have one. You're creating all of this manual process around your tool. And that's the thing I run into a lot when I'm talking with people: people in security are very burned out on tool switching, like, oh, there's a new acronym every month, how am I supposed to keep on top of all of this? And the real answer is just, every once in a while evaluate a new tool, or at least do a demo of it, because you probably don't realize how much time you're wasting on all of these processes.
And so something that I ran into a lot in my own evaluations and switching tools is we have this perception of, we've built all this process around our SAST. An easy example, because there are some bad SASTs out there: I've inherited a bad SAST, and we've created a bunch of processes around managing that bad SAST, whether it's scripts shipping vulnerabilities all over the place, Excel files flying all over the place, compliance, manual review meetings. And then the thought we have is, oh, if we switch SASTs, think of all the work it's going to be to port those processes over. And the actual answer is, hey, if you're using a good SAST, you don't need those bad processes anymore.
Speaker 1:
Free of charge.
Speaker 2:
Yeah. And that's what I'm getting at with the SAST stuff: you probably don't even realize the amount of weird manual work you're doing that modern products just take away from you. Look, you just click the authorize button in GitHub, all your repos are in there, they're all getting scanned, they're all getting webhook-scanned, there are no more pipeline jobs or maintaining images; all of this work you've done to maintain infrastructure is automatically taken care of for you by the new solution. And you're worried about switching solutions because you're worried about getting all that other stuff over, but that stuff just goes away.
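For readers who haven't seen the webhook-driven model, here is a minimal sketch of the idea: instead of maintaining a scan job in every pipeline, a single endpoint receives push events and queues a scan. Flask, the route, and the enqueue_scan helper are hypothetical stand-ins, not any specific vendor's implementation, and a real handler would also verify the webhook signature.

```python
# Minimal sketch of webhook-driven scanning on every push.
from flask import Flask, request, jsonify

app = Flask(__name__)

def enqueue_scan(repo_full_name: str, commit_sha: str) -> None:
    # Hypothetical: hand the revision to the scanning backend.
    print(f"queued scan for {repo_full_name}@{commit_sha}")

@app.post("/webhooks/github")
def handle_push():
    event = request.headers.get("X-GitHub-Event", "")
    payload = request.get_json(silent=True) or {}
    if event == "push":
        repo = payload.get("repository", {}).get("full_name", "unknown")
        sha = payload.get("after", "")
        enqueue_scan(repo, sha)
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8080)
```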
Speaker 1:
Yeah, I think the biggest example I have of that is that a lot of the time customers are asking me, but how often do you scan? I don't understand that question. It's not a question of how often I scan; I scan everything all the time. No, but let me ask you again, how often do you scan? Well, the paradigm and the mindset of the solution completely change, and sometimes it's difficult to actually be on the same page and talk the same language, because, of course, when you've been using something in a certain way for a long time, it's difficult to imagine you can do it differently and that it can actually solve all those problems for you. And there are other things to look for, obviously.
But yeah, that's probably true for solutions that have been out there for many, many years, where realistically you don't have a breakthrough innovation that completely transformed the market. It's natural evolution, but it still adds up to a lot of evolution over the years. It's still difficult to realize that things can still be called SAST but can actually work and deliver a very, very different outcome than something else that is also called SAST and that you've been using for many years. And I think it's just difficult for us as individuals, when we're in front of those small iterative changes over the years, to imagine how different it can be.
Speaker 2:
And that's the thing I always also want to be sensitive to: how do you accomplish this at big enterprise scale? Because that's a blind spot in my personal work history; PagerDuty is the biggest company I've been a part of, and it's big, but it's not massive, massive bank-scale. And I think the fear for a lot of people, when they think of program switching, is they get in their head, how am I going to create this perfect new… Here's our new SAST tool and we've got new documentation and we've got new processes and I've coordinated with every PM over every product team and created a six-month rollout strategy and all of that. But to me that's applying waterfall ideology to business processes, and instead the answer should be, hey, what if you get a more modern tool like Cycode and you put it into one piece of the organization; most orgs have some innovation group or dev group.
And the alternative is, right now you're probably just blind to whatever they're doing because you haven't gotten the process in place yet with the innovation center, or however you're thinking about it. And the real answer is you should use that as your testing ground for some of these modern security tools as well, so that you do this small-scale deployment, you figure out what works there as you're working with those developers on it, and then you slowly do the rollout. And that's why I think when I define ASPM more broadly, it includes both doing your own scanning to meet that innovation use case, but also the ability to import those other findings, because you're not telling the company to throw everything away. It's, look, I know you've spent 10 years rolling out every SAST scanner under the sun for different reasons, whatever; we will also ingest those results and be able to give feedback to developers for that stuff too over time.
Speaker 1:
Interesting. I'd like to talk about a topic that we hear more and more, a lot from the mid-market, and the mid-market usually also drives some interesting changes in the DevSecOps world, I would say: open source scanners. I think the mid-market is a very prominent user of open source scanners. What's your take on those? Do you see any limitations? What did you potentially use at PagerDuty? Maybe PagerDuty qualified as a mid-market company years ago, I would say, so you might have experienced that as well.
Speaker 2:
One, PagerDuty is a big Elixir shop amongst other things, and there are not a lot of enterprise Elixir scanning options out there, so we definitely had a combination of in-house and off-the-shelf, everything that you could have. And to me, security people are needlessly side-taking or divided on open source, where some people will say, oh, that's just some open source thing, and they won't even look at it. Or my least favorite thing is to just call it a wrapper. So it's like, oh, this is just a wrapper for ZAP or a wrapper for Semgrep or whatever. It's this idea that if something's just a wrapper then it's bad, and it's like, just focus on the outcome. I don't care what the scanning engine is; all that matters is are the results good and can I get them to developers to get remediated in time? I don't care if the scanning engine itself is open source or not.
It's a much more debatable thing around open sourcing your detection rules for a SOC environment where you're trying to catch attacks. But open sourcing your rules for proactive discovery of misconfigurations is only really a good thing, unless you think your devs are insider threats trying to get around the open source rules that are out there. There's not really a downside to it being open source, but on the other side, the benefit of it not being open source: a lot of security people, I think, think the benefit is that the rules aren't out there for people to discover, but I think the actual benefit is that there's a real developer commitment internal to the company to building a meaningful SAST engine that's not outsourced or limited by another proprietary engine.
And so to me it has more to do with a long-term business risk decision than with any direct consequence. All that to say, when I'm evaluating tools, it has zero impact whether the scanner or the scanning engine is open source or not. The thing I'll add, though, going back to that Django example, is that sometimes if you're using specific frameworks, it is beneficial to use the open source scanner specific to that framework, because it typically has better results than something that's trying to be an all-in-one generic scanner.
Speaker 1:
Yeah, it's difficult to be a great scanner for everything from COBOL to Rust, which is sometimes what we're asked to be. No kidding, this is sometimes what we're asked to be, to your point. But ultimately I think what you're saying goes back to what you were saying at the beginning: it's all about the ultimate outcome, the operational aspect of it. It's not about trying to trick the scanner or just the quality of the findings. As long as you reach a threshold that is satisfactory and you don't miss any of the big things, it's how can you really act on it at the end. So open source or not open source is not really the topic; I'm quite aligned with you here. And now, asking you to wear your developer hat: you've experienced SAST as a developer, you still talk to a lot of developers, and you probably see a lot of things through that lens. What are the specific pains we're talking about when we think about SAST for them?
Speaker 2:
Yeah, I think the number one pain is that they're not built for modern architectures. When SAST looks at the context, typically it only has access to the repo context and lacks the overall application structure. And so that creates a lot of false positives, especially around cross-site scripting, where it's like, someone could inject JavaScript here, but the tool doesn't know that upstream you've got something else that's filtering the data before it gets down to that service. And that's the hard work for the developer who gets that ticket that's like, hey, this function's vulnerable to cross-site scripting. The hardest part of this whole thing… I remember I did a webinar four or five years ago where the question got asked, is it harder to find things or fix things? And to me it's so obviously much harder to fix things, but the survey response was 50/50, harder to find versus fix.
And that speaks to the security emphasis on finding, but with the developer hat on, the hardest part is getting that ticket and then trying to validate: is this real or not? And that's the first thing a developer is going to do. And I actually don't think it should be the first. I think that's the developer training that needs to happen: look, instead of spending all of this time trying to… Because what you would do is spin up a local version of your app, connect to a staging Kubernetes cluster, start running your own custom Burp against the service to see if it's really vulnerable, see if you can pull off a cross-site scripting. That is in-depth, very difficult security research where you or I are probably doing something wrong along the way. And at the end of it, we probably don't even really know if we're vulnerable or not; we just weren't comically vulnerable as we tried to do these very basic pentester things.
And I think instead the tool needs to help people learn: just add this sanitizer. It is so low risk to just add a .sanitize to that function; just add it and move on. And so to me that's the biggest developer pain point: the self-inflicted lack of training and lack of security guidance on what they should do. Because a lot of security people will dive in, and I'm tempted to do this too, where a developer will say, hey, I got this cross-site scripting finding, can you help me figure out if it's real or not? And all of a sudden we're both hopping in and doing [inaudible 00:32:53] stuff.
We love to try to validate findings when the actual patch would take a few seconds if we just added the sanitizer, if we just had a… And I think that's the heart of this secure by design stuff: it's not how do we policy-as-code everything, I'm mixed on that. It's more like, hey guys, here's our solution to cross-site scripting, just import this library and add this function to it. And that's the guidance that developers need to make it a simple thing.
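As a minimal sketch of the "just add the sanitizer and move on" point, here the standard library's html.escape stands in for whatever .sanitize() call a given framework provides; the render_comment function is made up for illustration.

```python
# Minimal sketch: a one-line output-escaping fix for a flagged XSS finding.
from html import escape

def render_comment(comment: str) -> str:
    # Before: returning user input straight into HTML (the kind of thing SAST flags).
    # return f"<p>{comment}</p>"
    # After: escape on output and move on.
    return f"<p>{escape(comment)}</p>"

print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```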
Speaker 1:
So ultimately you think they sometimes lack education, and they probably lack guidance. And is there any way around that? What have you seen that is successful here, that really helped them?
Speaker 2:
I think it’s the guidance.
Speaker 1:
The guidance, okay.
Speaker 2:
Yeah. It’s very-
Speaker 1:
Is AI a topic for you in that sense?
Speaker 2:
Sort of. AI can be its own foot gun here, because one of the implementations I'm most proud of was a custom decorator, not at PagerDuty but at another company I was at, a custom decorator in Spring to mark when something was a public-facing GraphQL endpoint. And as part of that decorator, we added automatic sanitization to it. And then whenever we ran a SAST scan, we put a custom rule in to exempt that decorator from those findings. And that's what I mean by guidance, it's not just… Because so much of the content that's out there is Secure Code Warrior training stuff, and I haven't used any of their stuff, so this isn't a dig at them or anything; it's just that the kind of content we have is here's what cross-site scripting is, here's common ways to prevent it. And that is only half of the solution. A developer needs: okay, for our application, what is our way to defend against this attack?
And that's the foot gun with AI, because I know several companies doing just this, and they've all taken different approaches to how they build those AI responses. Some of them are doing pure LLM stuff on top of other findings, some of them are doing their own novel detections with contextual remediation, some of them are saying import our security libraries and just use those. And they all have different pros and cons. But it speaks to the true challenge here, which is the guidance on what your organization's specific way to fix these findings is when they come up.
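The decorator example James gives was a custom Spring annotation; here is a hedged re-sketch of the same idea in Python rather than Java. The decorator and function names are hypothetical, and the second half of the idea, a custom SAST rule that exempts anything carrying the decorator, lives in the scanner's rule configuration rather than in this code.

```python
# Re-sketch of the idea: mark a public-facing endpoint with a decorator that
# escapes string arguments automatically, then exempt that decorator via a
# custom SAST rule. All names here are hypothetical.
import functools
from html import escape

def public_graphql_endpoint(func):
    """Marks a handler as public-facing and escapes all string arguments."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        clean_args = [escape(a) if isinstance(a, str) else a for a in args]
        clean_kwargs = {k: escape(v) if isinstance(v, str) else v
                        for k, v in kwargs.items()}
        return func(*clean_args, **clean_kwargs)
    return wrapper

@public_graphql_endpoint
def search_users(query: str) -> str:
    # Inputs arriving here have already been escaped by the decorator.
    return f"<ul><li>results for {query}</li></ul>"

print(search_users('<img src=x onerror=alert(1)>'))
```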
Speaker 1:
Yeah. Some can probably generate the fix, and sometimes it's very custom business logic like the [inaudible 00:35:37] that you were mentioning, and this is where, usually with SAST, you end up building custom rules, creating that rule set, but that's tedious. It's not simple.
Speaker 2:
There are two things I want to talk about there. The first is the fixes example. I think SQL injection is a good example of this, where if you look up the OWASP cheat sheet for how to fix SQL injection, it's parameterized queries, but if you're using an ORM, that also automatically takes care of those SQL injections for you. And so that's what the guidance to a developer is: here's which one to use. And then, I forget what the other piece was. Oh, the custom rules thing. This is so tough to get over, because on the one hand, the idea of custom rules once you've gotten… I think people who aren't super in this world don't care about custom rules at all. But once you've started to dig in and you're flirting with creating your own sanitizer like I've done before, all of a sudden those custom rules are looking real juicy for building your own custom scanners for everything.
But then you're just doing the DIY approach again. Why are you even paying a vendor if you're just going to make your own giant list of custom rules that's going to become tech debt, that's going to become irrelevant, that once you're onto your fifth new job somewhere, someone's going to be trying to maintain? And so it's like, we need to create very clear, simple, standardized ways to remediate findings.
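To make the "parameterized queries, or let the ORM do it" guidance concrete, here is a small worked sketch with the standard library's sqlite3 module; the table and the attack string are made up for illustration.

```python
# String-built SQL is injectable; a parameterized query is the standard fix,
# and an ORM typically parameterizes for you under the hood.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"

# Vulnerable: the input becomes part of the SQL text.
injectable = f"SELECT name, role FROM users WHERE name = '{user_input}'"
print(conn.execute(injectable).fetchall())        # returns every row

# Fixed: parameterized query, so the input is always treated as data.
safe = "SELECT name, role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```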
Speaker 1:
Interesting. So touching on maybe the ROI side: if you have any advice or recommendations for people who would like to go on the SAST journey, or maybe restart their SAST journey because they tried and it didn't work out, what kind of recommendations can you give?
Speaker 2:
Yeah, I think a lot of it comes down to believing that better tools are out there. My first SCA and SAST journeys were monumental disasters. I wasted a ton of time implementing a SAST with a company because I evaluated it purely on how good the scanner is. And what's hard is that most security people will have developers as part of the vetting process, but we can really lead them astray like I did, because I was asking them, are the detections correct or interesting, and are they not false positives? So I was having the devs focus only on that and zero percent on usability. And so that created the nightmare process where we were having a quarterly review meeting with developer leadership, going over our SAST results and figuring out what was a true positive or a false positive and prioritizing. It was such a waste of time for us and for very high-level developer leadership that needs to be working on product.
And I'm sitting there wasting their time with this tool that's full of false positives, where it's difficult to figure out is this real or not, or what is the finding really trying to tell me, and all of that. And it was only by shifting to a new tool that ran more in pipeline, got the findings to developers quickly, and was more in the repository context where they're working, that things sped up. Instead of having to have a quarterly meeting with the chief architect, I was able to have a quick Slack exchange with the junior developer who was getting the SAST finding. It got rid of the bottleneck of the chief security architect and chief developer architect having to have a big meeting to run through this list, and it really democratized that approach.
And so that's why, to me, it's about being willing to experiment with different tools that create different workflows along the way. And I think keeping those small POC deployments is a big part of that, where you're not trying to create a three-year commitment of this is our SAST tool forever across the entire organization, but you're willing to be flexible and try different things out along the way.
Speaker 1:
So even in terms of deployment, going step by step depending on the maturity of each team and seeing how it goes. I think we sometimes hear customers saying, yeah, I want to block everything. Well, maybe don't go there right away; that's probably not going to end very well. Let's do it step by step, let's see how your developers are responding to it. Maybe there is also a way to adjust the rule set, because maybe some rules are not really working for you. So yeah, it's not zero or one. I think what I'm getting from your advice is, first, it's still called SAST, but maybe your experience from the past is not the experience you're going to get today. And so it's not zero or one, not all in; take the time, it's not that simple. But I would imagine that you would still recommend that it should be one of the pillars of a security program, even today in 2024.
Speaker 2:
Oh, yeah. Generally, I have to separate future-thinking James from present-company James, because future me thinks the ideal security program here is, one, all of my scanning in a single place. Because for most companies, especially those not at giant, massive enterprise scale, your production assets are defined as code and your production code obviously is code. And so you should scan all of that in one place and get results to developers. And then CSPM-like cloud scanning is an afterthought, because it's in the context of drift from the IaC. And then you've got a totally separate runtime solution that's only focused on: is an attacker exploiting my environment? And so this is the future state that I would love to see happen. But for the present, as far as prioritizing an application security program, what's the starting point for code security? Because the way a lot of early companies getting involved in this stuff will think about it is, all right, I've got a cloud security solution and then I need a code security solution.
And if you want to do code security at all, step one is simple branch protection rules and GitHub configuration stuff. But then step two, you'll get a lot more value out of SAST than SCA. And SCA is definitely hotter in a broad sense, because it's tied into this supply chain idea and it feels cool and fresh, but first of all, you're getting those results from your container scanning solution, and not a lot of people put that together. Container scanning continues to build SCA detections into it more and more. So there are some caveats there, but from a pure result visibility perspective, there are benefits that are more workflow-related to separating the two.
But from a pure visibility, what's-missing-from-my-program perspective, you have a lot of that with your container scanning thing. And so SAST is the big blind spot on the AppSec side of things; everything else, there are other places you can find it. But then as you add to that AppSec scanning solution, the real thing you're trying to solve is getting developers involved in security. And that's why I shift a lot of that to ASPM, because once the developers get involved with SAST, it's like, oh, hey, these are the same people that should also be fixing the container vulnerabilities, and should also be fixing the IaC misconfigurations, and should also be fixing the SCA package.json type stuff. And you want to get all those findings to developers with the same things we've talked about today: speed, getting it to them early, having it be easy to integrate and maintain, having it be sensible in the code context. That's why all these things are related, but SAST is definitely the starting-point scanner for application security.
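For the "step one is simple branch protection rules" point above, here is a rough sketch of driving that setting through the GitHub REST API with requests. The token, owner, repo, and status-check name are placeholders, and the payload shape reflects the branch-protection endpoint as I understand it, so check it against current GitHub documentation before relying on it.

```python
# Rough sketch: enable branch protection on main via the GitHub REST API.
import os
import requests

OWNER, REPO, BRANCH = "my-org", "my-repo", "main"
url = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"

payload = {
    "required_status_checks": {"strict": True, "contexts": ["ci/sast-scan"]},
    "enforce_admins": True,
    "required_pull_request_reviews": {"required_approving_review_count": 1},
    "restrictions": None,
}

resp = requests.put(
    url,
    json=payload,
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
)
print(resp.status_code, resp.json())
```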
Speaker 1:
It's interesting. I think you often hear customers trying to start with SCA because SCA is, for different reasons, easier, I would say. But ultimately what you're saying is you will get much more value if you actually start with SAST; even though the effort might be a bit higher, the impact will be drastically more important, especially because you are already somewhat covered on the SCA side with the container scan.
Speaker 2:
And the thing that I'll add there is just, look, I'm going from my personal experience, and I have never seen a true positive with an SCA scan, because there are so many contextual issues in converting… How a CVE on a transitive dependency converts to actual business risk is one of the hardest problems out there to solve. And that's why with a SAST scanner, a SQL injection is a SQL injection.
Speaker 1:
It's closer to the risk; it's easier to assess the risk itself. You talk about ASPM, which gives me an amazing segue to talk about that, and especially a term that we love at Cycode: complete ASPM. What's your definition of the complete ASPM?
Speaker 2:
Yeah. This is an article that I continue to have endless debates about with ASPM, because ASPM was born out of ASOC, which is an orchestration idea that comes out of the enterprise: you had teams that self-bought their own scanners, you've got all these scanners out there, so let's plug them all into a single dashboard to then orchestrate our visibility across our entire AppSec program. But then everyone rebranded to ASPM around that idea, because the reality of where these companies were at when that took hold was that some people had different scanners at different maturities and they also integrated some different stuff, and so everyone qualified for that category in some way. And to me, it's much easier to take that category and parallel it to CSPM. Early CSPM was a lot the same way, where there were a lot of different takes on what it is, whatever.
But now, for a CSPM, I expect it to have workload scanning and general cloud configuration scanning in the same place at a minimum. That's what I'm expecting a CSPM to have; that's a complete CSPM. And so for ASPM, I just applied that same idea to the application side, and that's the idea of the complete ASPM: it's everything you need to do application security, which is all of these different scanners, plus being able to integrate with other scanners and create workflows and searching. And so that's the heart of the idea: it's the one solution for end-to-end application security scanning.
This is a weird caveat to make, but I just feel like I have to make it. I don't think CSPM should have ever added the workload thing and become CNAPP, this big weird monstrosity of a product. And so that's why for ASPM, I'm okay leaving out the runtime-y stuff; runtime application security, which is why I prefer ADR for that, is still a very new, evolving thing that people are still figuring out, because application security people don't have a historic ability to do security operations work. There's just no framework for what you do when you put a Java developer in a SOC; that's just not something people have done. And so that is still getting figured out, but ASPM to me is very much end-to-end posture management that gives you everything you need in one place to do application security.
Speaker 1:
And is SAST, and AST in general, just one of the scanners, or is it more than that in the ASPM model?
Speaker 2:
Yeah, it's one of the many scanners. The way I look at it is that all of the files that are in a repo should be able to get scanned with one thing, because the alternative is that you're fricking running five pipelines: here's our IaC scanner, our container scanner, our SAST scanner, our SCA scanner, our SDLC scanner, our SBOM generator, our image signer. You're doing secrets, DAST, drift detection. Are you running 10 security jobs simultaneously inside your pipeline? That's the alternative, and that's why that just doesn't make sense to me, and I look for the one thing. You shouldn't need 10 different solutions just to cover all of the files. Every repo at most organizations has a little bit of Terraform, a little bit of pipeline definition, a little bit of Helm chart, a little bit of first-party code, some declarations of third-party code, and a Dockerfile. And so that is one scanner. It's not 10 different things-
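As a toy illustration of that "one integration covers every file type" point, here is a sketch that walks a repo and buckets files into the kinds of scans they would need; the mapping is illustrative, not any product's actual routing logic.

```python
# Toy sketch: classify the files in a repo by the scan type they need.
import pathlib
from collections import defaultdict

SCAN_BUCKETS = {
    "iac":       {".tf", ".tfvars"},
    "pipeline":  {".yml", ".yaml"},  # CI definitions, Helm charts
    "sast":      {".py", ".js", ".ts", ".java", ".go"},
    "sca":       {"package.json", "requirements.txt", "go.mod", "pom.xml"},
    "container": {"Dockerfile"},
}

def bucket_repo(repo_root: str) -> dict[str, list[str]]:
    buckets = defaultdict(list)
    for path in pathlib.Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        for scan, markers in SCAN_BUCKETS.items():
            if path.suffix in markers or path.name in markers:
                buckets[scan].append(str(path))
    return buckets

if __name__ == "__main__":
    for scan, files in bucket_repo(".").items():
        print(f"{scan}: {len(files)} file(s)")
```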
Speaker 1:
But going to the ASPM value proposition, one part, like you mentioned, is definitely about consolidating and not having to set up and operationalize 10 different scanners. But it also goes beyond that; it's about prioritization, because it's not about getting 10 times more vulnerability findings, it's about how the ASPM can really be the tool that tells you these are the 10% you should focus on. And when you think about that, how do you think about prioritization in the SAST context? Do you still do prioritization the way SAST point solutions have done it forever, based on the severity of rules, basically, simple as that, or do you see another benefit of SAST inside the ASPM world when it comes to that prioritization aspect?
Speaker 2:
Yeah, this is where, on the ASPM spectrum of vendors, there are people who are more on the testing side and there are people who are more on the vulnerability management side. And what's really tough for me to think about is the amount of context that each of those providers has, and generally the more context, the better prioritization and workflows can get. And the hard decision is how much context is actually necessary or not. So from an ASPM perspective of total code coverage, there are definitely benefits to knowing what pipelines this runs in, is this a public-facing web app, where in the stack does it sit, those sorts of things. But as you try to prioritize, you're going to get closer and closer to runtime context and runtime reachability.
And that’s where there’s a huge question mark still around like does that mean every ASPM provider should have their own agent that can then send runtime details back to the hub or do they integrate to get that data and does that data even matter? Because you can run a lot of static reachability pretty well and try to do false positive elimination based on is this SCA package function called in my code base? Is that SAST or SCA? The line blurs there.
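Here is a minimal sketch of the static-reachability idea just mentioned, "is this SCA package's vulnerable function actually called in my code base?", using the standard library's ast module. The package and function names are hypothetical, and a real engine would handle far more call patterns than this.

```python
# Minimal sketch of static reachability: is a vulnerable function from a
# dependency imported and called in first-party code?
import ast
import pathlib

VULN_PACKAGE, VULN_FUNC = "examplelib", "parse_unsafe"

def calls_vulnerable_function(source: str) -> bool:
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    imported_names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module == VULN_PACKAGE:
            imported_names.update(a.asname or a.name for a in node.names
                                  if a.name == VULN_FUNC)
        elif isinstance(node, ast.Import):
            imported_names.update(a.asname or a.name for a in node.names
                                  if a.name == VULN_PACKAGE)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Direct call: parse_unsafe(...)
            if isinstance(node.func, ast.Name) and node.func.id in imported_names:
                return True
            # Attribute call: examplelib.parse_unsafe(...)
            if (isinstance(node.func, ast.Attribute)
                    and node.func.attr == VULN_FUNC
                    and isinstance(node.func.value, ast.Name)
                    and node.func.value.id in imported_names):
                return True
    return False

if __name__ == "__main__":
    for path in pathlib.Path(".").rglob("*.py"):
        if calls_vulnerable_function(path.read_text(errors="ignore")):
            print(f"reachable vulnerable call in {path}")
```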
Speaker 1:
Well, yeah, you need the SAST technology to do it; I think this is what we're seeing in the market today. It's interesting the way you see it, because you can attack that problem from two angles, absolutely: getting the data and the context from production, which a lot of us are doing, or trying to do, and you can also see the production-side providers trying to go into the AppSec territory, for that matter. But we've seen that for many, many years, and that shift-left-going-right and right-going-left so far never really happened, for different reasons. It's not just the technology, it's a lot of other things. And you can also attack it from the other side: yes, code analysis is difficult, but you can actually do much more magic than what we used to do, with reachability analysis even in the context of SAST, in the context of ASPM in that case. And so maybe the answer is both, and the market still has to decide where exactly the benefit comes from.
Speaker 2:
That's why I don't hate the idea of another vulnerability management tool that sits outside of this and does some of that, because it's tough; there are real benefits to both. Getting static reachability before something's deployed lets you get relevant results to developers before an application is deployed. And a lot of times developers don't care too much about, not that they don't care, it's just not their job to know what pod this runs in and how it gets to the Kubernetes cluster and blah, blah, blah. That's data they don't always care about. But on the runtime side, that is the source of truth. If you want to really know if a function's executing or not, you can only check that at runtime; everything else is theoretical, even in the best case. And so that's why there are real trade-offs here, and I don't have a clear here's-the-answer, besides that both can never hurt.
Speaker 1:
No, fair enough. And I think we're all working toward adding more of that context, that production aspect, and also deeper analysis, ultimately to always go back to that topic of prioritization and better remediation. Because things could be a true positive, but ultimately they may never get called for whatever reason, so why focus your effort on remediating those when maybe some that you would imagine are lower severity are actually potentially exploited? You should focus your effort on those instead. And that's really an interesting angle on the ASPM side. For our audience that is still thinking about ASPM in general, can you share a little bit about the impact on productivity, overhead, or cost control that you've seen with your customers that adopted ASPM? And what would be your advice here when thinking about ASPM in general?
Speaker 2:
Yeah, I just think it makes everybody's life easier, because no developer is going to log into five different tools to check what the heck's going on with everything. You can't have a bunch of different pull request responses; PR comments are cool and all, but you're not going to have five different PR comments on every single push, one from every scanner checking every individual thing. And then security doesn't have to maintain the images and deal with all of that stuff anymore. And so the productivity impacts are just massive across the board: you're getting visibility from one integration to everything, and developers only have to check one tool and learn one. Every vendor is going to tell you, we built it so developers never have to log in, look at us, they just get a PR comment for everything. But at the end of the day, the developer is going to have to learn the tool.
Either they're going to learn it via weird PR comments that don't make sense, or they're going to be logging in and doing it, or some combination of the two. And so that's why they need to just go, oh, Cycode is our security tool. It's too much to be like, oh, well, we use Cycode for this thing and we use this other tool for this thing, and here are all of the intricacies of what's getting scanned by what. It's just creating a lot of mental overload for them. And so all that to say, developers' lives are easier, security lives are easier. It doesn't solve every problem under the sun, because there are still things getting discovered that you have to fix, and nobody likes fixing that, and they need incentives and all that crap, but it is certainly the easiest option to try to get application security done.
Speaker 1:
Well, thanks, James. I think that's an amazing closing argument. We're at time, I think, so we're going to wrap up now. I really wanted to thank you, because I think we went quite deep into the topic with quite detailed discussions, trying to cover different angles: security, today's market and developers, the ROI perspective, ASPM and [inaudible 00:57:02] as well. So this has been a very interesting discussion, and I hope the audience appreciated it as well. You also have a guest blog post series on our blog, so I invite everybody to check it out and read James's thoughts on the market, and also check out Latio.Tech. Do I pronounce it Latio or Latio.Tech, actually?
Speaker 2:
Yep. That’s whatever.
Speaker 1:
Okay. Well, now everybody knows how to spell it at least.
Speaker 2:
Yeah.
Speaker 1:
So thank you, James. Thanks everyone for joining us today, and we’ll see you for our next webinar. Thank you.
Speaker 2:
Thank you. See you.