The Future of Application Security: From AppSec Chaos to Maturity with ASPM
For most security teams, software development presents an unmanageable attack surface, with sprawling security tools and alert fatigue making it harder to remediate and reduce risk. Security teams are in AppSec chaos. How do you create mature application security controls, measure risk effectively, and get visibility into the critical 1% of vulnerabilities faster? Don’t miss our expert speaker, Andy Ellis, Operating Partner at YL Ventures, Advisory CISO at Orca Security, and author of 1% Leadership, as he discusses how to build effective and mature AppSec controls. We’ll also look at the future of application security, with a deep dive on Application Security Posture Management (ASPM), which brings all vulnerability alerts into a single pane of glass for immediate visibility and quicker prioritization and remediation.
In this session you'll:
- Understand what elements are driving AppSec chaos
- Get CISO-tested frameworks for building mature AppSec controls
- Discover the role of ASPM in solving visibility, prioritization and remediation
Amir Kazemi:
Hey, everyone. So welcome, and thanks for joining the AppSec Secrets Webinar Series, brought to you by us, Cycode. Our approach and philosophy to AppSec is that we think of it more as a team sport, and what this series really does is bring together security, AppSec, dev, and business leaders like yourselves. We’re essentially trying to bring all of you together to really flip that script on AppSec and discuss best practices and challenges that you’re probably having in the space yourselves as well.
My name’s Amir Kazemi, and I’ll be your host today for this episode, The Future of Application Security: From AppSec Chaos to Maturity with ASPM. Super stoked to have security heavyweight Andy Ellis with me right here. Andy is a seasoned tech executive with a lot of expertise in the cybersecurity space. He’s been the operating partner at YL Ventures, the Advisory CISO at Orca Security, and he’s also the author of 1% Leadership, which you’ll all receive a free copy of at the end of the webinar. Andy previously served as the Chief Security Officer at Akamai Technologies, where he was responsible for the company’s cybersecurity strategy over a 20-year tenure. But in general, I honestly cannot think of a better person than Andy to talk to about this topic. Yeah. Andy, did I cover everything regarding your background?
Andy Ellis:
Yeah, I think you covered the high points. I mean, we could go into the nitty-gritty details, but at some point, we just run out of time for the webinar.
Amir Kazemi:
For sure. Yeah. Let’s dive right into it. I wanted to start a little bit broader just to kick things off. Let’s talk a little bit about the attack surface, or the unmanageable attack surface. How do you think that has evolved over time? I think it would be good to level set on what an attack surface even is to begin with.
Andy Ellis:
Yeah. So I think when people talk about the attack surface, we immediately jump to like, “Where are the adversary touch points?”
Amir Kazemi:
Yeah.
Andy Ellis:
Even before I jump into that, I had to think about where the center of gravity of our organization is, because that drives an attack surface conversation. That might sound a little strange, but when we think about applications… Let’s just go back for a bit to pre-widespread-internet, when applications were downloaded software. Your attack surface was the piece of software that you shipped, and you didn’t really think about what was behind it, all the development infrastructure, because there was almost none of that. Maybe you started to worry a little as how you delivered software updates became a touch point, but over time, we’ve moved from that, obviously, to an internet-centric model, to now the internet is the business plane.
But as we’ve thought about the attack surface for applications, we’re only starting to tackle pieces of what I think of as the true center of gravity, which is the entire software development life cycle. We’ve talked about SDLC security, but historically, we never really talked about the front end of applications as part of the SDLC, and I actually think they’re inseparable. Look, I’m to blame, because we built WAFs into CDNs to say, “Oh, look, we’ll protect the front end of your applications,” and as a result, people are like, “Well, that’s not really part of AppSec.” We’re like, “Okay. We’re dealing with that. We’ll do virtual patching,” and it let us deal with things or not deal with things. Then, SBOMs came around, and people are like, “Oh, we just need to know what’s in the software.”
I think all of these become touch points for the entire SDLC as our attack surface, because the adversaries aren’t just trying to break our running applications, they’re trying to seize control of our applications. They’re trying to get access to the data, and so you have to think about the whole life cycle of building an application as where your attack surface actually starts.
Amir Kazemi:
Yeah. Gotcha. Let’s say you’re a new CISO. How do you think about or even collect the data or the inventory around that attack surface? Can you aggregate that on a spreadsheet? Do you use a specific tool? How do you think about that?
Andy Ellis:
Yeah. So I’m a huge fan of the simple spreadsheet model, which is you always start with a spreadsheet as a new CISO, but your spreadsheet is never going to become detailed. As soon as you need detail, you need to have it in some other tool, but you use the spreadsheet to keep track of the stuff that’s not in any other tool. So you might write down, “My SDLC,” and you should write this question: “How many systems,” however you want to define system, “are in my SDLC?” As you learn that, you might say, “Oh, here’s all the things I have to start tracking. I’ve got to keep track of every source code repository, and oh my God, there’s a lot of them. All of my build systems, and all of my developer desktops, and all of these things are part of my SDLC, and Slack is part of my SDLC.” How many people actually cognitively think about your messaging system as part of your SDLC?
Amir Kazemi:
Yeah.
Andy Ellis:
But if I can tell a developer to accept a pull request via Slack, that’s SDLC for you.
Amir Kazemi:
Exactly. Yeah. A lot of the time, that goes unknown, right? That’s not even covered, or people aren’t thinking about it.
Andy Ellis:
It’s really not even covered, and then you’re going to want to start to think about the outcomes as well.
Amir Kazemi:
Yeah.
Andy Ellis:
Right? Like, “Okay. Why am I measuring this? What’s the hazard? What’s the risk? What am I actually trying to do?”
Amir Kazemi:
Yeah. So how would you say that this evolution has created this thinking around building your security programs? How are you thinking about building these security programs knowing the evolution of this attack surface?
Andy Ellis:
So I think one thing people do get a benefit from is there’s been this slide that’s been shared for 30 years about the cost of fixing a bug in the waterfall development model, which says it increases by 10X the later you get into the cycle. My favorite thing is there’s no actual study behind it. It was literally a thought paper that somebody put out, but it resonates with us, and it feels kind of appropriate in a waterfall world.
In a non-waterfall world, it really no longer does, because you have to say, “Look, fixing bugs is not really that expensive if you truly are agile. Not fixing bugs is what’s really expensive. So how are you detecting and cleaning? What does remediation look like? How is this entirely part of your life cycle?” Because I think if your goal is to say, “We’ll never deploy software that has a defect in it,” then you’re setting yourself up for failure.
Too many organizations, I think, have that as an implicit assumption, but I think you need to make it explicit: we need to be able to find defects anywhere from ideation to deployment, and figure out how we quickly detect them, fix them, and prevent them from going out, especially if we’ve already fixed them once. I think as a CISO, the most embarrassing thing is when you do some remediation campaign, you clean something up, and then a new piece of software comes off a different branch and reintroduces the vulnerability.
Amir Kazemi:
Yeah, yeah. What about maturity, or how are you measuring the effectiveness of these security programs?
Andy Ellis:
Yeah. So measurement is hard, and most people focus on measurements of activity rather than measurements of effectiveness, because it’s really easy to say, “Well, how many bugs did we fix?” Then, you get into, I think, the Dilbert cartoon of, “Well, if you measure people on how many bugs they fix, you’re incentivizing them to introduce more bugs,” which might not be that they intentionally, deliberately write bad code, but they’re like, “Oh, I could take this one defect and write it up as five bugs, so I fixed five defects.”
Amir Kazemi:
Yeah.
Andy Ellis:
Right? I think the real question has to be, “What do you think effectiveness is?” Effectiveness has to be things like, “Even if breached, our application doesn’t reveal X.” So some of it goes into the design. How many applications have too much data accessible in the end? So there’s a design question: “How do we remove secrets?” How many times have we heard the, “Oh, somebody posted a piece of code on GitHub as an example of their work, but it had keys in it”?
Amir Kazemi:
Mm-hmm.
Andy Ellis:
Right? That demonstrates an ineffective security program.
Amir Kazemi:
Yeah.
Andy Ellis:
So I think that’s almost what you want to start looking for. Take keys as the example, or any secret, right? Any secret that is in your code base is a weakness waiting to be exploited. It’s a hazard that this secret can get out. So one measure of effectiveness becomes, “How many secrets have you gotten rid of? How many secrets are protected by whatever your vaulting solution is? Do you move them into a vault and make them accessible only via API, not just written into the code?” The more you do that, the more comfort you feel that you’ve implemented an effective control. So think about your effective controls, and then track implementation, not the activity of playing whac-a-mole.
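To make that vaulting pattern concrete, here’s a minimal sketch in Python. It’s illustrative only: the secret path is made up, and the environment-variable lookup is a stand-in for an authenticated API call to whatever vaulting solution you actually run.

```python
import os

def read_secret(path: str) -> str:
    """Fetch a secret at runtime instead of hard-coding it.

    Stand-in implementation: reads from the process environment. A real
    deployment would make an authenticated API call to a vault; this only
    illustrates the access pattern described above.
    """
    env_key = path.upper().replace("/", "_")
    value = os.environ.get(env_key)
    if value is None:
        raise KeyError(f"secret {path!r} not provisioned for this service")
    return value

# The hazard named above: a key written into the code base, waiting to leak.
# DB_PASSWORD = "hunter2"   # anti-pattern, shown only as a comment

# The control: code holds only a reference; the secret arrives at runtime.
os.environ.setdefault("BILLING_DB_PASSWORD", "demo-only-value")  # demo setup
db_password = read_secret("billing/db_password")  # reads BILLING_DB_PASSWORD
```

The measurable control then becomes the ratio of vaulted references to secrets still written into code, which is the implementation tracking Andy suggests.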
Amir Kazemi:
Yeah, yeah. Are you thinking about that measurement a little bit differently across the CISO level, the AppSec level, and the individual level as well?
Andy Ellis:
Yeah. Yeah, I think at the CISO level, you basically want to say, “Look, we have some set of standards for our code base, and what does adoption look like at the high level?” Right?
Amir Kazemi:
Sure.
Andy Ellis:
“What percentage of these standards that are meaningful have we vetted to show they would work, and what parts of our SDLC actually implement those in a way that we’re comfortable with?” Right? Maybe you track that by business unit. Maybe say, “Oh, look, here’s the new thing we’re going to roll out and implement.” So the number starts out low, and it’s going to grow. But as a CISO, you want one slide about AppSec, and even that’s almost too much. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
You want to be able to just summarize and say, “End to end, here’s what my AppSec program looks like. Here’s the 12 high-level principles we have, and here’s what adoption is.” But if I’m a developer, I need to go into detail, right? If I’m the AppSec engineer, I’ve got to be able to go in and say, “Here’s the specific problems. Here’s all the code bases with issues. What are we going to fix? What do we need to write new?” Because it’s not a bug to be fixed if it’s an architectural flaw. That’s the thing a lot of people miss in the AppSec space: sometimes these are defects in how we wrote the software, not in what got written.
Amir Kazemi:
Yeah.
Andy Ellis:
If you’ve written secrets into code, there’s not a bug fix here. You need something to manage secrets. That’s new capability.
Amir Kazemi:
Yeah. Yeah. Exactly. You need something to manage secrets, but it could also be a cultural thing as well, right?
Andy Ellis:
Right. Yeah, and I’m a big fan of looking at, “Organizationally, where do you have common problems, so that you can tackle them?” I recall something one of my AppSec managers did at Akamai a long time ago. We went and did a web app analysis, the very standard hire-a-third-party-to-come-in, and literally, they’re just doing manual fuzzing, and they find a million SQL injections and all these problems. We knew what the right solution was, which is you need to write an input sanitization library and just run everything through the sanitizer. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
But there’s no way the engineers would’ve accepted that if we had said it upfront, and I can’t say it as the CISO, because everybody would listen to me and be like, “Oh my god, Andy, you’re so negative. Why do you not believe this would happen?” I can’t say, “Well, I have a lot of experience, not only with this engineering team, but with a lot of them in general.”
Amir Kazemi:
Yeah.
Andy Ellis:
So what our AppSec engineer did was say, “Okay. Well, I’m going to go give them one to fix.” Right? He went in, and the engineers did the obvious thing: “Oh, here’s the string that was the exploit,” and they literally hard-coded that string into their code. As soon as they fixed it, he walked back in with the next one. He was like, “Oh, here’s the obvious bypass that I already had planned for what you did,” and then the next month, he brought them five things that were similar. He kept doing this until they said, “We’re tired of playing whac-a-mole. How should we solve this?” At which point, now we had a conversation, they wrote input sanitization, and then we came back and said, “Oh, by the way, you’ve written this great library; you actually have 50 different apps that need to use the library.”
So it was this campaign that maybe took longer than it would’ve if everybody had done what we wanted right upfront, but they now believed it was their solution. I think it got done faster than it would’ve had we just hammered them with, “Here’s a thousand findings.” So my engineer needed to track that. He needed access to every one of these defects. I did not. That’s a really important thing to understand: at different levels, you need different operational visibility.
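Here’s a minimal sketch of the contrast in that story, in Python; the regex, table, and function names are illustrative, not the actual Akamai library. The first function is the whac-a-mole fix; the second pair is the centralized library everything routes through.

```python
import re
import sqlite3

# Whac-a-mole fix (the first thing the engineers shipped): block the one
# exploit string from the finding. The next variant walks right past it.
KNOWN_BAD = "' OR 1=1 --"

def is_safe_naive(user_input: str) -> bool:
    return KNOWN_BAD not in user_input

# Centralized fix (the library the team eventually wrote): strict allow-list
# validation in one shared place, plus parameter binding so user input is
# never spliced into SQL at all.
IDENT_RE = re.compile(r"[A-Za-z0-9_.@-]{1,64}")

def sanitize_identifier(user_input: str) -> str:
    if not IDENT_RE.fullmatch(user_input):
        raise ValueError("input rejected by sanitization library")
    return user_input

def fetch_customer(conn: sqlite3.Connection, customer_id: str):
    cid = sanitize_identifier(customer_id)
    # Bound parameter, not string concatenation: the driver does the escaping.
    return conn.execute(
        "SELECT id, name FROM customers WHERE id = ?", (cid,)
    ).fetchone()
```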
Amir Kazemi:
Gotcha. Yeah, yeah. No, that’s super important. You touched on secrets a little earlier. We also touched on the attack surface. What about security controls in general? How do you know which security controls to put in place, given you may know what the attack surface is? Where do you start, and how do you think about that?
Andy Ellis:
So I like drawing on Nancy Leveson’s work, and I think we’re probably framed a little too tight for me to point out the book… No, it’s right there. It’s that blue one I’m pointing at. It’s called Engineering a Safer World. It’s actually a safety engineering book, not a security engineering book, but there are a lot of parallels. Very simply, and you don’t have to go buy the book if you don’t want to, what you can apply is: first, talk about unacceptable losses. What is the bad outcome that could happen? So you look at your AppSec space, and you ask, “What’s the worst outcome?” Don’t think about how yet, right? You say, “Okay. All of our customer data gets published and exposed.” Right?
Amir Kazemi:
Mm-hmm.
Andy Ellis:
That’s an unacceptable loss. Okay. We all agree on that. Now, you can start to talk about the hazards that lead to it. You say, “Well, inside our system, our application has access to all of the customer data at once. The application has the ability to pull everything from the database.” That’s a hazard, and, “Oh, look, the administrators can access the application.” You connect these hazards. Then, at some point, you’re like, “Oh, now I can talk about a scenario. What if an adversary compromises an administrator credential, connects into the application, dumps the table, and walks away with it?” Does that feel plausible? Absolutely. Basically, every breach ever sounds something like that.
Now, you can say, “Okay. What would be the controls that would protect against this? I’ve got this story, a simple narrative. It’s almost like telling a fairy tale.” It’s Little Red Riding Hood; you’re just reversing it and asking, “What would we do to stop it?” It’s like, “Oh, maybe one thing we want to do is not have the application able to look at the entire customer database at once. There’s no reason the same application our customers use can actually pull all that data. Right? It should be stored queries. It should only be able to pull up one customer record at a time.”
Make it a lot harder, so you don’t have these accidental breaches. You say, “Okay. On the front end, maybe it’s about implementing multifactor authentication for my administrators, or maybe it’s giving my administrators a whole different way to connect and access this data.” Right? So you just tell these narratives, and once you have the narrative, the controls really pop out at you. My favorite ones are always multifactor authentication and eliminating as much administrator credential bloat as you can. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
If there’s basically one administrator who has access to every system in your application, or worse, every laptop in your environment, because then you get access to every user, those two things are basically what will cripple you. Then, you can say, “Okay. Let’s look at… Where is my data? How do I isolate my data?” You just build on top of that.
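One of the controls that pops out of that narrative, the application can only pull one customer record at a time through a stored query, might look like this minimal sketch. The table, role, and grant syntax are illustrative and vary by database.

```python
import sqlite3

def get_customer(conn: sqlite3.Connection, customer_id: int):
    """The only read path the customer-facing app gets: one record, by key,
    through a bound parameter (a 'stored query' in the sense above)."""
    return conn.execute(
        "SELECT id, name, email FROM customers WHERE id = ? LIMIT 1",
        (customer_id,),
    ).fetchone()

# Deliberately absent: anything like dump_all_customers(). Pair the
# code-level restriction with a database-level one so even a compromised
# application credential can't bulk-read the table, e.g. (illustrative syntax):
#   REVOKE SELECT ON customers FROM app_role;
#   GRANT EXECUTE ON get_customer_by_id TO app_role;
```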
Amir Kazemi:
Yeah. Yeah. I like that frame of thinking where you talk about… or you start off with, “What are the unacceptable outcomes for the business,” right, “or the program?”
Andy Ellis:
Right.
Amir Kazemi:
Then, that’s how you kick it off. Then, do you use that or that frame of thinking to map to the perimeter or the attack surface that you found?
Andy Ellis:
So I think you don’t do it directly, but you’ll do it indirectly.
Amir Kazemi:
Indirectly? Okay.
Andy Ellis:
Because you’re trying to build… What is the sequence? What’s the fairytale about what an attacker could do to exploit hazards from the outside? Somebody who used to work for me called it adversary powers. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
So, first, assume every adversary has the power to connect to the internet, to send email, and to run Metasploit.
Amir Kazemi:
Mm-hmm.
Andy Ellis:
Okay. With those three powers, what can they do to you to get another power? Like, “Oh, they can send email that contains a malicious payload that somebody might click on,” and it’s like, “Okay. Well, if I’m subject to that vulnerability, where if you click on a payload, you get access to X, great, now the adversary can escalate their power,” and at some point, they have the power to do a negative thing to me.
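The “adversary powers” framing maps naturally onto a reachability check: powers are nodes, and each rule says “with these powers, the adversary gains that one.” Here’s a small runnable sketch; every power and rule name below is made up for illustration.

```python
# Each rule: (powers required, power gained). An unacceptable loss is any
# power that's reachable from the three base powers listed above.
RULES = [
    ({"internet", "email"}, "user_clicks_payload"),
    ({"user_clicks_payload"}, "workstation_foothold"),
    ({"workstation_foothold", "metasploit"}, "admin_credential"),
    ({"admin_credential"}, "dump_customer_table"),  # the unacceptable loss
]

def reachable(base: set) -> set:
    """Expand the adversary's power set until no rule adds anything new."""
    powers = set(base)
    changed = True
    while changed:
        changed = False
        for needs, gain in RULES:
            if needs <= powers and gain not in powers:
                powers.add(gain)
                changed = True
    return powers

# Every adversary starts with the three base powers.
print("dump_customer_table" in reachable({"internet", "email", "metasploit"}))

# A control removes a rule: if MFA blocks credential theft, delete the
# admin_credential rule and the unacceptable loss becomes unreachable.
```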
Amir Kazemi:
Yeah.
Andy Ellis:
But you’re always aimed at those unacceptable losses, and the reason it’s important to think about what the unacceptable losses are is that they don’t map to your assets.
Amir Kazemi:
Gotcha.
Andy Ellis:
If you start from an asset base, you think about losing assets. Think about the airline industry, right?
Amir Kazemi:
Yeah.
Andy Ellis:
You can list all of the airline’s assets. Passengers’ lives are not an asset of the airline. You would never write them down as an asset. But when you think about unacceptable losses, right at the top is killing your passengers.
Amir Kazemi:
Yeah.
Andy Ellis:
Right?
Amir Kazemi:
Of course.
Andy Ellis:
So, now, as you think about how would you protect an airline, you want to think about, “Well, what are all of the ways that passengers could die in the care of an airline, and do you have controls that would minimize those hazards?”
Amir Kazemi:
Mm-hmm. Yep. I love that. I love that frame of thinking. Once you establish those controls, how do you get to a point of like, “Okay. Now, I have executive trust, and it’s almost on autopilot in a way.” Right?
Andy Ellis:
So I think it’s pretty rare for people to get to a trusted autopilot. It’s often that people don’t want to pay attention. When you think about AppSec, because companies don’t tend to think about the SDLC from end to end as being that center of gravity, AppSec is all too often just thought of as a point problem: “Oh, we need to do better code reviews,” “Oh, we need to do better secrets management,” “Oh, we need to do each of these individual things,” but we’re not actually thinking holistically. The more things you have to deal with, the more likely it is that your executives are going to turn a blind eye to you. It’s like, “Oh, we’re doing 17 things around AppSec.” “Well, how many things should you be doing?” is not actually their next question. They assume that if you’ve got 17 programs, this is well taken care of, because you would not be investing in 17 programs otherwise.
Amir Kazemi:
Yeah. So I think it’s almost inevitable. You tell me. But as you build out more and more controls, you’re essentially introducing tool sprawl; there are too many tools across a program.
Andy Ellis:
Absolutely.
Amir Kazemi:
How are CISOs today, or even AppSec leaders, thinking about managing that or balancing that across their programs?
Andy Ellis:
So I think as you’re looking at that, “Oh, I’ve got so many things,” the question always has to be, “What is buying you good defenses? What is effective, and what runs on its own?” You did talk about, “Oh, do you have something that everybody feels comfortable with?” Right? I’m a firm believer that the more you can get the developers to invest in, the better off you are. So I prefer developer self-service on AppSec tools way more than having the security team run them.
Amir Kazemi:
Yeah.
Andy Ellis:
So it’s like, “Oh, if you get the developers to buy in…” When I was at Akamai, we had a huge set of things around source code security that were actually all built by developers. I would often consult with the developers, but they were the ones who were like, “Oh, we need to have authenticated check-ins.” This is back in 2000. Authenticated check-ins were barely a thing. The most common source code repository on the market did not support them; it was literally clear text. Anybody could check in and claim to be anyone, and our developers were the ones who said, “No, no, no. We’re going to wrap this in an SSH tunnel so we can see who actually did the check-in,” and they built this whole thing to do it. Now, I don’t have to maintain it. It’s their solution. So I get to think about it as part of my AppSec program, but it’s not a tool I’m in charge of. I want to run platforms as a CISO. I don’t want to run point solutions.
Amir Kazemi:
Yeah, yeah. Well, I mean, and that’s an interesting point around that collaboration between dev and security. Oftentimes, there’s tension, right? So how do you not just minimize that tension, but build a better relationship between the two orgs, especially as a new CISO, a new leader coming into a new role? What are your thoughts on that?
Andy Ellis:
So I think there’s a lot of different ways, but the first is to recognize that if you need somebody else to build and implement a solution, then they need to believe in the problem.
Amir Kazemi:
Yeah.
Andy Ellis:
You can’t just lecture your way into getting them to believe in it. I love doing the… if it takes me 10 arguments to convince you, can I do it in eight? Not because it’s more efficient, but because I leave the last two for you. Your brain goes, “Oh, I’ve got this, and this, and this, and oh, and this,” and you believe it way more than if I told you the whole thing. So the more I can get people to finish that argument, the better the education and awareness sticks.
Then, on the flip side, make it clear that you understand what’s going on on the developer side. Often, you have a developer productivity or developer tools team that owns the SDLC infrastructure. They’re going to be your biggest partner as a security professional if you can find things that would improve security and also make their lives easier or match their goals. Here’s a simple one: if you’re a company that’s been around for a while, you probably have a lot of legacy infrastructure in your SDLC. Go talk to your dev team and say, “Hey, what is the actual latency for rolling out software? If we need to make a major change, like OpenSSL just drops a new issue, what is our wall clock latency, and what is our cost to get there?” You’ll often be shocked at the answer.
I actually looked at this when I was at Akamai. We actually couldn’t roll out software to our whole network without massive expense. In fact, at its worst (we fixed this before I left), we would literally disrupt one product release. When I say disrupt one product release: you would only get so many product releases a year, and one of them would get taken to deal with a vulnerability. At that point, the developer teams hated me. If I walked in and said, “Oh, we have to fix this,” they’re like, “No. You just destroyed an entire product release to do that. Everything has now slipped by however many weeks.”
So we went and championed this set of programs that the developer productivity team had been trying to get prioritized. They were all about CI/CD efficiency. They wanted to go in and say, “Okay. We’re going to bring down release time by this much.” All of a sudden, the CISO is championing release productivity and release efficiency. When somebody asked, “Why are you doing this?” I said, “Well, just do the math. Right now, when I say we need to fix something, I have to spend three weeks fighting the whole company to convince you. Not because you don’t want to fix stuff, but because the cost is so high. If I can bring that cost down, you won’t fight me. You believe it needs to be fixed, and now you’ll just go do it.”
Amir Kazemi:
Yeah. It’s also that you’re putting yourself in their shoes, right?
Andy Ellis:
Right.
Amir Kazemi:
Trying to help them improve the developer experience overall, so.
Andy Ellis:
Yeah, and one of my favorite things is, if you’re in the same meetings with different people and you hear what their common critiques are, either of your requests or other people’s, next time you’re in a meeting, if you know what they’re going to say, what they’re going to object to, say it for them. One of my best partners, she was responsible at the time for professional services. We’d propose, “Oh, we need to fix this thing in how our application works, but it’s going to require our customers to all make a change.” Right?
The first time we did it, she’s like, “Well, here’s what the cost will be: every professional services person interacting with every customer, boom.” Then, going forward, I would just always say, “Oh, have we thought about the impact on professional services for this proposal?” Now I have an ally who’s like, “Oh, you see me. You know what my pain is. Even if you’re not saying that’s too much, you’re at least asking the question for me.” Then, I knew that if I wasn’t in the room, she’d be the one asking, “Well, is this secure? Does this meet our security requirements?” because I had been speaking the language of someone else. So the more you can speak the language of the developer, “What is their actual pain point?”, the more likely you can drive an AppSec program that they will appreciate.
Amir Kazemi:
I love that. Yeah. I love that. Coming back to this security controls topic, obviously, tool sprawl is a thing.
Andy Ellis:
Yep.
Amir Kazemi:
How do you think that that has affected visibility for security teams in general?
Andy Ellis:
So I think a big challenge is, if you’re a security team, going and interacting with different tools is a pain. On some teams, if they’re really big, you have a team that manages each tool, and they’re each prioritizing their own thing. But if you’re not looking at that in an integrated way, what happens is your person who’s looking at the pen testing results comes to a development team and says, “Here’s all of my findings from this pen test. Go fix them.” But they’re walking in the day after the compliance team showed up and said, “Here’s all of our findings about how you’re not satisfying the processes we wrote down for SOC 2.” Right?
Amir Kazemi:
Yeah.
Andy Ellis:
Then, your SAST team comes in: “Here’s the application security findings, all the defects we found looking at your code.” So you start working at cross purposes when you have this sort of sprawl. Honestly, most tools suck at helping developers read them and decide what to do.
Amir Kazemi:
Yeah.
Andy Ellis:
We use this phrase, “alert fatigue,” a lot in the industry. First, I want to say the fatigue is real, but I hate the fact that we call these alerts. I came out of an ops background. “Alert” means drop what you’re doing and go solve this. An alert is: you have been breached, something is down, incident. You ran a source code analysis tool and found that I did a thing that maybe I shouldn’t do. That’s not an alert. Yet, that’s what most tools are doing: they’re giving you hygiene findings, and they’re important hygiene findings.
I don’t want to minimize all of them, but I do recall I was once told… I had a system that did a port scan and vulnerability analysis, and it said we needed to turn off ICMP timestamp replies, which I’m actually in favor of doing; I don’t need them, so why am I doing the processing at all? But the argument was that somebody would know what time it was on my servers. I’m like, “Have you not heard of NTP? I’m pretty sure an adversary knows what time is running on all of my servers, because I have an NTP constellation that works.” In the early parts of my career, and I see people repeat this, you get this report from your tool, and you hand it to a developer and say, “Fix everything.” They see a thousand things, and they’re like, “No. Please prioritize…” They will argue with you for months over whose job it is to prioritize the list of things.
Amir Kazemi:
Between security and dev?
Andy Ellis:
Between security and dev.
Amir Kazemi:
Yeah.
Andy Ellis:
Now, nobody is actually fixing anything. Think about how much time we waste arguing about whether this should be done before that, and now it’s a thousand things. It’s worth that argument if you’re the developer, but in the meantime, nothing is actually getting fixed.
Amir Kazemi:
Yeah. That brings up the topic of prioritization. Who owns prioritization? How do you actually narrow down to the right critical alerts when you have all this tool sprawl, when you have this alert fatigue, right?
Andy Ellis:
Right.
Amir Kazemi:
There’s this constant mess which we call “AppSec chaos.” So how do you focus on the right things?
Andy Ellis:
Yes. So, ultimately, I actually think that the dev team owns prioritization. I know any developer who’s listening is going to be like, “Well, you’re just saying that because you’re InfoSec.” But I actually do mean that, that I think they own it, but I think they should also own the tools. The problem is we bring the tools, and since they own the prioritization, there’s no incentive on us to fix the prioritization that the tools are providing.
So to solve this, the tools need to have the context. They need to help and say, “Look, here are the things you should go fix right now. You have five minutes. Go fix this thing.” If you’re building a tool, that should be the number one thing in your dashboard: you know what level someone is at in the organization. Assume they only have five minutes; what should they do with it? If it’s the CISO, it’s, “Who do you call?” I have five minutes. My single greatest power is the ability to pick up the phone. So if you said, “You need to pick up the phone, call this engineering manager, and say, ‘Your team hasn’t fixed anything in six months,’” that’s the most effective thing I can do with five minutes of my time.
Amir Kazemi:
Yeah.
Andy Ellis:
Right? Arm me for that phone call.
Amir Kazemi:
Yeah.
Andy Ellis:
But if I’m a developer who pops in, it’s like, “You have five minutes. Hey, we have an input sanitization library that you are not using. Here’s how to integrate it. Here’s our recommendation and best practice,” or, “We’re doing tokenization. Here’s our thing.” What is that five minutes? Then, the next thing is, “What does my path look like?” You have to sell people on a vision, not a treadmill, because if I say, “Look, what I need from you every day is to fix one bug,” you’re like, “I’m not really excited by that.” But if I come to you and say, “Look, here’s what we’re going to do. We are going to have an application that’s going to look like this from a security and safety perspective. People will trust it with their data,” you can get on board with that when I say, “We’re going to be implementing a vault for all PII. We’re going to be removing all secrets from code. Here’s the list of things we’re doing.”
You as the developer can have some say in how those get rolled out to you. When do you want each one? Do you want to roll them out to all of your apps at once, or do you want to pick an app? In fact, on the pick-an-app point, this is the thing I feel like I’ve been fighting with every security person my entire career: if you have more than one application in your environment, which I’m sure almost everybody does, and you’re like, “Here’s this critical problem that we want to go fix across all of our applications,” you should never try to roll it out to your flagship applications first.
I know that’s where all your data is. You want to go protect it, but everybody will fight you. Instead, go to marketing and say, “Hey, you’ve got this web app. We’re going to roll out this privacy vault code for you,” and they’re going to be like, “I don’t have any privacy data.” We’re like, “Great. We’re going to roll it out. It should have no impact on you, but we just want to make sure it works on our platform and everybody is comfortable with it.” Or let them pick. You’re like, “I don’t care. Roll it out to some set of applications. At some point, we’ll go after the flagship, but the flagship should never be first.”
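As a sketch of that “you have five minutes” dashboard idea from a moment ago: findings carry a role, an effort estimate, and an impact score, and the dashboard surfaces the single best action that fits the time the viewer has. All the data and field names below are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    role: str        # who can take it: "ciso", "appsec", "developer"
    minutes: int     # estimated effort
    impact: int      # higher = more risk removed
    summary: str

BACKLOG = [
    Action("ciso", 5, 90, "Call the manager whose team has fixed nothing in 6 months"),
    Action("developer", 5, 60, "Integrate the shared input-sanitization library"),
    Action("developer", 240, 80, "Migrate billing secrets into the vault"),
    Action("appsec", 5, 40, "Triage the new secret-in-code finding"),
]

def next_action(role: str, minutes_available: int) -> Optional[Action]:
    """Highest-impact action this role can finish in the time they have."""
    fits = [a for a in BACKLOG if a.role == role and a.minutes <= minutes_available]
    return max(fits, key=lambda a: a.impact, default=None)

print(next_action("ciso", 5).summary)       # arm the CISO for the phone call
print(next_action("developer", 5).summary)  # the five-minute library fix
```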
Amir Kazemi:
In terms of remediation, what are the developers expecting? So prioritization is done, for example, right?
Andy Ellis:
Yep.
Amir Kazemi:
You’re getting down to the right critical alerts, but what about remediation? In a 10,000-person organization, how are you getting the right alerts to the right people in the most efficient manner? I think that’s a hot topic right now.
Andy Ellis:
Oh, it absolutely is, and in so many organizations, it’s just the institutional memory of some program managers inside the security team. They’re like, “Oh, I know that this application is owned by Amir, so I’ll call Amir and say, ‘Hey, I got some stuff for you.’” We need to get to programmatic remediation, where we can identify a problem, we know what needs to be fixed, and we know who needs to fix it. We can give them, “Here is your dashboard. So, Amir, here’s all of the problems in your environment.” You don’t need a security person to collect that data for you. It’s just automatically all there.
That’s step one, and then we say, “Okay. Now, you need to start remediating, so how can we make it so you’re doing the same remediations at once?” So it’s like, “Oh, we’re going to do secrets removal,” or give you the choice: “Here’s 12 different problems across the 10 apps you’re responsible for. You’ve got to fix 120 things. Do you want to fix one problem across all 10 apps, or do you want to fix all 12 problems in one app?” That should be your choice, but right now, the humans that interact with you decide for you, except what they really decide is the strict ordering of the 120. So it’s almost the least efficient way to remediate.
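That batching choice is easy to see in code. A minimal sketch, with made-up app and problem names: the same 120 findings can be grouped either way, and the tool should let the owner pick.

```python
from collections import defaultdict

# 12 problem types across 10 apps = 120 findings (all names illustrative).
findings = [
    {"app": f"app-{a}", "problem": f"problem-{p}"}
    for a in range(1, 11)
    for p in range(1, 13)
]

by_problem = defaultdict(list)  # fix one problem across all apps
by_app = defaultdict(list)      # fix all problems in one app
for f in findings:
    by_problem[f["problem"]].append(f["app"])
    by_app[f["app"]].append(f["problem"])

print(by_problem["problem-1"])  # campaign: e.g. secrets removal, everywhere
print(by_app["app-1"])          # cleanup: one app, all 12 problems
```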
Amir Kazemi:
Yeah. Got it. There’s an emerging trend, and obviously, this acronym called ASPM.
Andy Ellis:
Yep.
Amir Kazemi:
Application Security Posture Management. I feel like this kind of category has evolved over time.
Andy Ellis:
Very much so.
Amir Kazemi:
Yeah. So I’m curious of your thoughts on… The crux of it is it’s all about visibility, prioritization, remediation. How are you seeing CISOs evolve their practices and their programs to implement and adopt platforms like ASPM tools?
Andy Ellis:
So I think we’re really in the early stages of the ASPM migration, and it’s probably worth looking at the CSPM world to ask, “What might it look like?” In CSPM, the first iteration, which really matches the second iteration of ASPM, was, “Look, we’ll just tell you what all your problems are. We have an easy way to collect them in one place.” Cloud had that easy: “Oh, look, we just connect to AWS or GCP, we pull everything, we do some analysis, and we hand you everything wrong in your cloud. Have a nice day.” Then, we start adding that context to figure out what’s happening, and all of a sudden, CSPM becomes this new space that’s called CNAPP, Cloud-Native Application Protection Platform.
I think ASPM is at the beginning of that change. The first iteration of ASPM, which people no longer really talk about, was an inventory of tools and the environments you apply them to. We’re not even going to measure how effectively you’re doing something; it’s more like, “Have you rolled out the right tools to the right places?” So I think that’s the old-school world. The new world says, “Okay. Let’s take that knowledge of all the tools you should have to protect your whole SDLC, start pulling that data together in one place, and now start to figure out how it all connects.” It will be interesting to see, I think, whether ASPM continues taking the same leap as CSPM. Are they going to run into each other someday? I’m always worried about which spaces will run into which spaces in the future, because there is some overlap. If you do ASPM right, it will obviously include a whole bunch of elements of CSPM. I think the converse isn’t necessarily true, because not every application will be in the cloud.
Amir Kazemi:
Yeah. Wouldn’t you say that CSPM brought to light the visibility that AppSec eventually needed, and that drove the evolution to ASPM?
Andy Ellis:
Yes. Right, and I think you do see some ASPM capabilities in the CSPM world. I think that’s one reason it has caused Gartner and others to start redefining ASPM to say, “You have to be able to deal with the problems from the moment the developer thinks of them and starts typing until they’re deployed. These are not separate problems. It is one problem space.”
Amir Kazemi:
Yeah, yeah. Tying ASPM to security controls, I think there’s an opportunity for CISOs in terms of consolidation of some of their tools, and ASPM is one avenue to provide some of that consolidation, either through using native tools that a vendor provides or using your third-party tools. Do you have any thoughts on the balance, on which ones I should rip and replace, or what I should do across my AppSec program to drive some of that efficiency?
Andy Ellis:
Yeah. So I think the way that I look at it is you should figure out how much value you’re getting out of a tool, and that feeds into your rip-and-replace. If you have some legacy tool that you’re not getting any value out of, you should be like, “Let’s rip and replace this.” But if you’ve got something that’s providing value, maybe you wait, but you should always be strategic. I’m a fan of saying, “Look, if I’ve got 12 tools, I want to have a platform there. I do not want different tools.”
Now, it might be that I end up with six tools that I’ll feed into one platform, and I’m okay with that, because that’s now only one security system. It doesn’t matter how many vendors I have. It’s how many touch points I’m actually going to use to run my security program. But if I know where I’m going, then my goal is to say, “Okay. What’s the easiest path to get there?” because if I’m doing a big change, I want to be able to show value as early as possible in the change. So it might be that I’m adding a new layer on top to collect a bunch of data points, and then I’m going to start ripping out old systems, but nobody will really notice, because they’re only interacting with my platform that shows all the data. It’s like, “Oh, I need to replace my SAST vendor over here, and I’m going to do something else.” But the developers don’t see it, because they don’t care where a result came from as long as it’s always coming through the same spot.
Amir Kazemi:
Yeah, yeah. You bring a good point on the developer side. Is there a developer productivity tax with too many of these AST tools?
Andy Ellis:
Oh, absolutely.
Amir Kazemi:
Okay.
Andy Ellis:
That’s why I want the development organization to own AppSec as much as possible. Individual developers should only have one dashboard. That should be our ultimate vision: “Here’s the software you own. Here’s all the problems in it. Here’s the ones you should fix next. Here’s your rating.” Imagine if I’m the CISO, right? I can go to the board and say, “I’m going to give our company a grade of 84%,” whatever that means. I suspect with Tim Brown coming under SEC prosecution, we’re going to have more CISOs who want to be very concrete about, “Oh, here’s all the bad findings.”
Well, I want you as a developer to be able to go into the same tool I use and get your personalized score that says, “Oh, the company is at 84%, and you are at 62%.” Whatever those numbers mean, let’s not debate that today, but you are dragging the company down. You see this, your boss sees it, and you get it aggregated by organization, so your boss can be like, “Well, Amir is at 62%. Karina next to him is at 75%. I know who’s getting a better bonus this year. Amir, you’d better deal with this, because clearly you’re not prioritizing security, and the board and the CEO are all over us for having bad security.”
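A minimal sketch of that rollup, with invented apps, owners, and controls: one scoring function drives the board-level grade, the per-developer score, and the per-organization aggregation.

```python
from statistics import mean

# Each app: an owner plus pass/fail against the program's controls.
apps = [
    {"owner": "amir",   "controls": {"secrets_vaulted": True,  "input_sanitized": False}},
    {"owner": "amir",   "controls": {"secrets_vaulted": False, "input_sanitized": False}},
    {"owner": "karina", "controls": {"secrets_vaulted": True,  "input_sanitized": True}},
]

def score(app) -> float:
    """Percentage of controls this app passes."""
    checks = list(app["controls"].values())
    return 100 * sum(checks) / len(checks)

company = mean(score(a) for a in apps)
per_owner = {
    owner: mean(score(a) for a in apps if a["owner"] == owner)
    for owner in {a["owner"] for a in apps}
}
print(f"company: {company:.0f}%")  # the grade the CISO takes to the board
print(per_owner)                   # the personalized scores per developer
```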
Amir Kazemi:
Yeah. No, that’s great. I think what we’ll do at this point, Andy, thank you so much, we’re going to go ahead and see if we can take a question or two.
Andy Ellis:
Okay.
Amir Kazemi:
Looks like we’ve got one question here from the audience: “From a CISO’s point of view, what are some of the most important metrics you need your AppSec program to report on? Do you need to take those to the board or the exec team, and how are you thinking about that?”
Andy Ellis:
So, eventually, it should be going to the board in some fashion, but not necessarily upfront. The first metric is coverage: “How many applications do you have, and what is your coverage of visibility and of controls?” Right? This is the hard part, as most people don’t want to say, “Oh, there’s 75 applications out there that I have no visibility into.” But you need to write that down. If you’re unwilling to say what you know you don’t know, then you’re never going to find out what you don’t know that you don’t know.
If I write down, “I’ve got 75 apps where I don’t know what’s going on, and I can list them,” somebody in my organization is going to show up and say, “Uh, it’s not 75, Andy. It’s 130. Here’s the 55 you didn’t list.” But if I don’t list the 75, I’ll never get to the 55, so just start with coverage of your controls. Then, at the high level, there are some obviously known best practices around code hygiene: “Oh, yes, we do input sanitization everywhere. That’s just a norm. What are all of our norms? Input sanitization, removing secrets from code, PII in the vaults.”
List what those are and what percentage of your applications meet your standards, and you should be able to defend every one of those standards. If you say, “Oh, it’s important that every vulnerability is fixed in 30 days,” I’m going to challenge you on that, because most vulnerabilities don’t actually need to be fixed in 30 days. There are some that do, but there are a lot where it’s, “Oh, we’re doing regular maintenance fixes. As long as we’re within 90 days, great.” Okay. What percentage of the time are you within 90 days? Every SLA should be met about 85% of the time. 15% of the time, you’re going to miss your SLA. That’s okay. Document it, understand why, clean it up. But if you’re aiming for 100% SLA, your organization is going to hate you.
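As a worked example of that SLA metric, here’s a minimal sketch computing the percentage of vulnerabilities remediated within a 90-day window. The dates are invented, and open findings are aged against an as-of date rather than counted as misses outright.

```python
from datetime import date

SLA_DAYS = 90
AS_OF = date(2023, 7, 1)

# (opened, closed); closed=None means still open. Data is illustrative.
vulns = [
    (date(2023, 1, 5), date(2023, 2, 1)),    # 27 days: met
    (date(2023, 1, 20), date(2023, 6, 30)),  # 161 days: missed
    (date(2023, 3, 2), date(2023, 4, 15)),   # 44 days: met
    (date(2023, 4, 11), None),               # 81 days old and open: still met
]

def within_sla(opened: date, closed: "date | None") -> bool:
    resolved_by = closed or AS_OF  # open findings age against the as-of date
    return (resolved_by - opened).days <= SLA_DAYS

met = sum(within_sla(o, c) for o, c in vulns)
print(f"SLA compliance: {100 * met / len(vulns):.0f}% (target ~85%, not 100%)")
```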
Amir Kazemi:
That’s awesome. Last question is around the topic of ASPM.
Andy Ellis:
Yep.
Amir Kazemi:
“From a CISOs perspective, how do I know that I need it and I need it now?” What’s your perspective on that?
Andy Ellis:
So I think the answer is, if your revenue stream relies on some application that other people are interacting with or using, you probably need this now, because you don’t know what you don’t know. If you have manually built an ASPM program, fantastic. If you can answer all the questions that came up here, and you’ve got a coverage metric for each of these things, you functionally have a homegrown ASPM. You should consider how many people it takes to collect that data. Could you use an ASPM product to let those people do something more valuable? But I think if you don’t have applications, ASPM is probably not the right thing for you.
Amir Kazemi:
Yeah. No, I appreciate that. That’s super relevant, and I think it’s also… The other angle is size of organization, right? You might be too small for an ASPM. But if you’re just the right size or you don’t have the visibility, like you mentioned, you’re going to need something like that, right? You got-
Andy Ellis:
Yeah. If you’re a small organization, and you can name every application off the top of your head, and you can hold your whole security program in your head, you are the ASPM. That’s okay. I was exactly that in the very early days. It was me and one other person. We were the ASPM for the company. At some point, that doesn’t scale anymore.
Amir Kazemi:
Yep. Exactly. I think that’s all we have time for for questions. How do people get in touch with you, Andy, if they want to follow you, get in touch to speak with you?
Andy Ellis:
Yeah. So they can find me on Twitter or LinkedIn; those are my most common platforms. I’m CSOAndy on both. Or go to csoandy.com to find me. I publish a newsletter; it’s supposed to be weekly, but with the chaos of the last month, it’s been more like fortnightly. You can find the link there.
Amir Kazemi:
Amazing, and if anybody needs any resources or additional insight on ASPM, you can reach out to me at [email protected] or just reach out via LinkedIn. You can also visit our website at cycode.com; we have great resources at cycode.com/resources as well. Look out for the next episode, launching soon. Thank you, and have a great week. Thanks so much, Andy.
Andy Ellis:
Thanks for having me, Amir.
Amir Kazemi:
Thanks.