The Future of Application Security: From AppSec Chaos to Maturity with ASPM
For most security teams, software development presents an unmanageable attack surface, with sprawling security tools and alert fatigue making it harder to remediate and reduce risk. Security teams are in AppSec chaos. How do you create mature application security controls, measure risk effectively, and get visibility into the critical 1% of vulnerabilities faster? Don't miss our expert speaker, Andy Ellis (Operating Partner at YL Ventures, Advisory CISO at Orca Security, and author of 1% Leadership), as he discusses how to build effective and mature AppSec controls. We'll also look at the future of application security, with a deep dive on Application Security Posture Management (ASPM), which brings all vulnerability alerts into a single pane of glass for immediate visibility and quicker prioritization and remediation.
In this session you'll:
- Understand what elements are driving AppSec chaos
- Get CISO-tested frameworks for building mature AppSec controls
- Discover the role of ASPM in solving visibility, prioritization and remediation
Amir Kazemi:
Hey, everyone. So welcome, and thanks for joining the AppSec Secrets Webinar Series, brought to you by us at Cycode. Our approach and philosophy to AppSec is that we think of it as a team sport, and what this series does is bring together security, AppSec, dev, and business leaders like yourselves. We're essentially trying to bring all of you together to flip the script on AppSec and discuss best practices and the challenges that you're probably facing in the space yourselves.
My name's Amir Kazemi, and I'll be your host today for this episode, The Future of Application Security: From AppSec Chaos to Maturity with ASPM. Super stoked to have security heavyweight Andy Ellis with me right here. Andy is a seasoned tech executive with a lot of expertise in the cybersecurity space. He's been the Operating Partner at YL Ventures, the Advisory CISO at Orca Security, and he's also the author of 1% Leadership, which you'll all receive a free copy of at the end of the webinar. Andy previously served as the Chief Security Officer at Akamai Technologies, where he was responsible for the company's cybersecurity strategy over a 20-year tenure. But in general, I honestly cannot think of a better person to talk to about this topic than Andy. Yeah. Andy, did I cover everything regarding your background?
Andy Ellis:
Yeah, I think you covered the high points. I mean, we could go into the nitty-gritty details, but at some point, we just run out of time for the webinar.
Amir Kazemi:
For sure. Yeah. Let's dive right into it. I wanted to start a little bit broader just to kick it off. Let's talk a little bit about the attack surface, or the unmanageable attack surface. How do you think that has evolved over time? I think it would be good to level set on what an attack surface even is to begin with.
Andy Ellis:
Yeah. So I think when people talk about the attack surface, we immediately jump to, "Where are the adversary touch points?"
Amir Kazemi:
Yeah.
Andy Ellis:
Even before I jump into that, I had to think about where the center of gravity of our organization is, because that drives an attack surface conversation. That might sound a little strange, but when we think about applications... Let's just go back a little bit, pre-widespread-internet, when an application was downloaded software. Your attack surface was the piece of software that you shipped, and you didn't really think about what was behind it, all the development infrastructure, because there was almost none of that. Maybe you started to worry a little as how you delivered software updates became a touch point, but over time, we've moved from that, obviously, to an internet-centric model, to now the internet being the business plane.
But as we've thought about the attack surface for applications, we're only starting to tackle pieces of what I think of as the true center of gravity, which is the entire software development life cycle. We've talked about SDLC security, but historically, we never really talked about the front end of applications as part of the SDLC, and I actually think they're inseparable. Look, I'm to blame, because we built WAFs into CDNs to say, "Oh, look, we'll protect the front end of your applications," and as a result, people are like, "Well, that's not really part of AppSec." We're like, "Okay. We're dealing with that. We'll do virtual patching," and it let us deal with things, or not deal with things. Then SBOMs came around, and people are like, "Oh, we just need to know what's in the software."
I think all of these become touch points for the entire SDLC as our attack surface, because the adversaries aren't just trying to break our running applications. They're trying to seize control of our applications. They're trying to get access to the data. So you have to think about the whole life cycle of building an application as where your attack surface actually starts.
Amir Kazemi:
Yeah. Gotcha. Let's say you're a new CISO. How do you think about, or even collect, the data or the inventory around that attack surface? Can you aggregate that on a spreadsheet? Do you use a specific tool? How do you think about that?
Andy Ellis:
Yeah. So I'm a huge fan of the simple spreadsheet model, which is: you always start with a spreadsheet as a new CISO, but your spreadsheet is never going to become detailed. As soon as you need detail, you need to have it in some other tool, but you use the spreadsheet to keep track of the stuff that's not in any other tool. So you might write down, "My SDLC," and you should write this question: "How many systems," however you want to define system, "are in my SDLC?" As you learn that, you might say, "Oh, here's all the things I have to start tracking. I've got to keep track of every source code repository, and oh my God, there's a lot of them. All of my build systems, and all of my developer desktops, and all of these things are part of my SDLC, and Slack is part of my SDLC." How many people actually think about their messaging system as part of their SDLC?
Amir Kazemi:
Yeah.
Andy Ellis:
But if I can tell a developer to accept a pull request via Slack, that's SDLC for you.
Amir Kazemi:
Exactly. Yeah. A lot of the time, that goes unknown, right? That's not even covered, or people aren't thinking about it.
Andy Ellis:
It's really not even covered, and then you're going to want to start to think about the outcomes as well.
Amir Kazemi:
Yeah.
Andy Ellis:
Right? Like, "Okay. Why am I measuring this? What's the hazard? What's the risk? What am I actually trying to do?"
Amir Kazemi:
Yeah. So how would you say that this evolution has created this thinking around building your security programs? How are you thinking about building these security programs knowing the evolution of this attack surface?
Andy Ellis:
So I think one thing people do get a benefit from is that there's this slide that's been shared for 30 years about the cost of fixing a bug in the waterfall development model, which says it increases by 10X the later you get into the process. My favorite thing is that there's no actual study behind it. It was literally a thought paper that somebody put out, but it resonates with us, and it feels kind of appropriate in a waterfall world.
In a non-waterfall world, it really no longer does, because you have to say, "Look, fixing bugs is not really that expensive if you truly are agile. Not fixing bugs is what's really expensive. So how are you detecting and cleaning? What does remediation look like? How is this an integral part of your life cycle?" Because I think if your goal is to say, "We'll never deploy software that has a defect in it," then you're setting yourself up for failure.
Too many organizations, I think, have that as an implicit assumption, but I think you need to make this explicit: we need to be able to find defects anywhere from ideation to deployment, and we need to quickly detect them, fix them, and prevent them from going out, especially if we've already fixed them once. As a CISO, that's always the most embarrassing thing: you do some remediation campaign, you clean something up, and then a new piece of software comes off a different branch and reintroduces a vulnerability.
Amir Kazemi:
Yeah, yeah. What about maturity, or how are you measuring the effectiveness of these security programs?
Andy Ellis:
Yeah. So measurement is hard, and most people focus on measurements of activity rather than measurements of effectiveness, because it's really easy to say, "Well, how many bugs did we fix?" Then you get into, I think, the Dilbert cartoon of, "Well, if you measure people on how many bugs they fix, you're incentivizing them to introduce more bugs," which might not mean they intentionally, deliberately write bad code, but they're like, "Oh, I could take this one defect and write it up as five bugs, so I fixed five defects."
Amir Kazemi:
Yeah.
Andy Ellis:
Right? I think the real question has to be, "What do you think effectiveness is?" Effectiveness has to be things like, "Even if breached, our application doesn't reveal X." So some of it goes into the design. How many applications have too much data accessible in the end? So there's a design question like, "How do we remove secrets?" How many times have we heard the story of somebody posting a piece of code on GitHub as an example of their work, but it had keys in it?
Amir Kazemi:
Mm-hmm.
Andy Ellis:
Right? That demonstrates an ineffective security program.
Amir Kazemi:
Yeah.
Andy Ellis:
So I think that's almost what you want to start looking for. Say, "I'll take keys as this example," or any secret, right? Any secret that is in your code base is a weakness waiting to be exploited. It's a hazard that this secret can get out. So one measure of effectiveness becomes, "How many secrets have you gotten rid of? How many secrets are protected by whatever your vaulting solution is? Do you move them into a vault and make them accessible only via API, not just written into the code?" The more you do that, the more comfortable you feel that you've implemented an effective control. So think about your effective controls, and then track implementation, not the activity of playing whac-a-mole.
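The effectiveness metric Andy describes, secrets still written into code versus secrets moved behind a vault, can be sketched in a few lines. The regex patterns, function names, and numbers below are illustrative assumptions, not any particular scanner's detection rules:

```python
import re

# Toy patterns for common hardcoded credentials; real secret scanners
# use far richer detection than this illustrative pair.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # password literals
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return every substring that looks like a secret written into code."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits

def vaulting_effectiveness(total_secrets: int, vaulted_secrets: int) -> float:
    """Share of known secrets that live in a vault (fetched via API)
    rather than in the code base -- the control-implementation metric,
    not a count of whac-a-mole activity."""
    if total_secrets == 0:
        return 1.0
    return vaulted_secrets / total_secrets

code = 'db_password = "hunter2"  # TODO move to vault'
found = find_hardcoded_secrets(code)
print(found)
print(vaulting_effectiveness(10, 7))  # 0.7 -> 70% of known secrets vaulted
```

Tracking that ratio over time measures implementation of the control, rather than how many individual leaks were chased down.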
Amir Kazemi:
Yeah, yeah. Are you thinking about that measurement a little bit differently across the CISO level, the AppSec level, and the individual level as well?
Andy Ellis:
Yeah. Yeah, I think at the CISO level, you basically want to say, "Look, we have some set of standards for our code base, and what does adoption look like at the high level?" Right?
Amir Kazemi:
Sure.
Andy Ellis:
"What percentage of these standards that are meaningful have we vetted to show they would work, and what parts of our SDLC actually implement them in a way that we're comfortable with?" Right? Maybe you track that by business unit. Maybe you say, "Oh, look, here's the new thing we're going to roll out and implement." So the number starts out low, and it's going to grow. But as a CISO, you want one slide about AppSec, and even that's almost too much. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
You want to be able to just summarize and say, "End to end, here's what my AppSec program looks like. Here's the high-level 12 principles we have, and here's what adoption is." But if I'm a developer, I need to go into detail, right? If I'm the AppSec engineer, I've got to be able to go in and say, "Here are the specific problems. Here are all the code bases with issues. What are we going to fix? What do we need to write new?" It's not a bug to be fixed if it's an architectural flaw. That's the thing a lot of people miss in the AppSec space: sometimes these are defects in how we wrote the software, not in what got written.
Amir Kazemi:
Yeah.
Andy Ellis:
If you've written secrets into code, there's no bug fix here. You need something to manage secrets. That's new capability.
Amir Kazemi:
Yeah. Yeah. Exactly. You need something to manage secrets, but it could also be a cultural thing as well, right?
Andy Ellis:
Right. Yeah, and I'm a big fan of looking at, "Organizationally, where do you have common problems, so that you can tackle them?" I recall something one of my AppSec managers did at Akamai a long time ago. We went and did a web app analysis, the very standard hire-a-third-party-to-come-in engagement, and literally, they're just doing manual fuzzing, and they find a million SQL injections and all these problems. We knew what the right solution was, which is you need to write an input sanitization library and just run everything through the sanitizer. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
But there's no way the engineers would've accepted that if we had said it upfront, and I can't say that as the CISO, because everybody would listen to me and be like, "Oh my god, Andy, you're so negative. Why do you not believe this would happen?" I can't say, "Well, I have a lot of experience, not only with this engineering team, but with a lot of them in general."
Amir Kazemi:
Yeah.
Andy Ellis:
So what our AppSec engineer did was say, "Okay. Well, I'm going to go give them one to fix." Right? They went and did the obvious fix: "Oh, here's the string that was the exploit." So they literally hard-coded a check for that exact string into their code, and as soon as they fixed it, he walked back in with the next one. He was like, "Oh, here's the obvious bypass that I already had planned for what you did," and then the next month, he brought them five things that were similar. He kept doing this until they said, "We're tired of playing whac-a-mole. How should we solve this?" At which point, now we had a conversation, they wrote input sanitization, and then we came back and said, "Oh, by the way, you've written this great library; you actually have 50 different apps that need to use it."
So it was a campaign that maybe took us longer than it would've if everybody had done what we wanted right upfront, but they now believed it was their solution. I think it was done faster than it would've been had we just hammered them with, "Here's a thousand findings." So my engineer needed to track that. They needed access to every one of these defects. I did not. That's a really important thing to understand: at different levels, you need different operational visibility.
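A minimal sketch of the kind of shared input sanitization library such a team might end up writing, assuming an allow-list design; the field names and patterns are invented for illustration, and parameterized queries remain the first-line SQLi defense alongside it:

```python
import re

class RejectedInput(ValueError):
    """Raised when a field fails its allow-list check."""

# Hypothetical allow-lists: every input field must be registered here
# before it can reach a query, so "run everything through the sanitizer"
# becomes a single choke point instead of per-app string fixes.
ALLOW_LISTS = {
    "username": re.compile(r"^[A-Za-z0-9_]{1,32}$"),
    "order_id": re.compile(r"^[0-9]{1,10}$"),
}

def sanitize(field: str, value: str) -> str:
    """Validate `value` against the allow-list for `field`.

    Unknown fields are rejected outright, which forces new inputs to be
    registered in the library rather than slipping past it."""
    pattern = ALLOW_LISTS.get(field)
    if pattern is None or not pattern.match(value):
        raise RejectedInput(f"{field}={value!r} failed validation")
    return value

print(sanitize("username", "andy_e"))       # valid input passes through
try:
    sanitize("username", "x' OR '1'='1")    # classic SQLi probe
except RejectedInput as exc:
    print("rejected:", exc)
```

The one-library, fifty-apps payoff is exactly why a shared choke point beats hard-coding individual exploit strings.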
Amir Kazemi:
Gotcha. Yeah, yeah. No, that's super important. You touched on secrets a little bit earlier. We also touched on the attack surface. What about security controls in general? How do you know which security controls to put in place, given you may know what the attack surface is? Right? Where do you start, and how do you think about that?
Andy Ellis:
So I like drawing on Nancy Leveson's work, and I think we're probably framed a little too tight for me to point out the book right... No, it's right there. It's that blue one that I'm pointing at here. It's called Engineering a Safer World. It's actually a safety engineering book, not a security engineering book, but there are a lot of parallels. Very simply, and you don't have to go buy the book if you don't want to, what you can apply is: first, talk about unacceptable losses. What is the bad outcome that could happen? So you look at your AppSec space and you say, "What's the worst outcome?" Don't think about how yet, right? But you say, "Okay. All of our customer data gets exposed and published." Right?
Amir Kazemi:
Mm-hmm.
Andy Ellis:
That's an unacceptable loss. Okay. We all agree on that. Now, you can start to talk about the hazards that lead to it. You say, "Well, inside our system, our application has access to all of the customer data all at once. The application has the ability to pull everything from the database." That's just a hazard. And, "Oh, look, the administrators can access the application." You connect these hazards. Then, at some point, you're like, "Oh, now I can talk about a scenario. What if an adversary compromises an administrator credential, connects into the application, dumps the table, and walks away with it?" Does that feel plausible? Absolutely. Basically, every breach ever sounds something like that.
Now, you can say, "Okay. What would be the controls that protect against this? I've got this story, a simple narrative. It's almost like telling a fairy tale." I'm like, "It's Little Red Robin Hood. You're just reversing it and saying, 'What would we do to stop it?'" It's like, "Oh, maybe one thing we want to do is not have the application actually able to look at the entire customer database at once. There's no reason that the same application our customers use can pull all that data. Right? It should be stored queries. It should only be able to pull up one customer record at a time."
Make it a lot harder so you don't have these accidental breaches. You say, "Okay. On the front end, maybe it's about implementing multifactor authentication for my administrators, or maybe it's giving my administrators a whole different way to connect and access this data." Right? So you just tell these narratives, and once you have the narrative, the controls really pop out at you. My favorite ones are always multifactor authentication and eliminating as much administrator credential bloat as you can. Right?
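The stored-queries control described above, where the customer-facing app can only ever pull one record at a time, might look like the sketch below. The schema, table name, and accessor are hypothetical illustrations:

```python
import sqlite3

# Illustrative in-memory database; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "a@example.com"), (2, "b@example.com")])

def get_customer(customer_id: int):
    """The only query the application is allowed to run: a parameterized
    lookup that returns at most one customer row. There is deliberately
    no code path that can dump the whole table."""
    return conn.execute(
        "SELECT id, email FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()

print(get_customer(1))   # (1, 'a@example.com')
print(get_customer(99))  # None -- no record, no bulk error dump
```

Even if an adversary compromises the application, the hazard ("the application can pull everything from the database") has been removed by construction.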
Amir Kazemi:
Yeah.
Andy Ellis:
If there's basically one administrator who has access to every system in your application, or worse, every laptop in your environment, because then you get access to every user, those two things are basically what will cripple you. Then, you can say, "Okay. Let's look at... Where is my data? How do I isolate my data?" You just build on top of that.
Amir Kazemi:
Yeah. Yeah. I like that frame of thinking where you talk about... or you start off with, "What are the unacceptable outcomes for the business," right, "or the program?"
Andy Ellis:
Right.
Amir Kazemi:
Then, that's how you kick it off. Then, do you use that frame of thinking to map to the perimeter or the attack surface that you found?
Andy Ellis:
So I think you don't do it directly, but you'll do it indirectly.
Amir Kazemi:
Indirectly? Okay.
Andy Ellis:
Because you're trying to build... What is the sequence? What's the fairy tale about what an attacker could do to exploit hazards from the outside? Somebody who used to work for me called it adversary powers. Right?
Amir Kazemi:
Yeah.
Andy Ellis:
So, first, assume every adversary has the power to connect to the internet, to send email, and to run Metasploit.
Amir Kazemi:
Mm-hmm.
Andy Ellis:
Okay. With those three powers, what can they do to you to get another power? Like, "Oh, they can send an email that contains a malicious payload that somebody might click on." Okay. Well, if I'm subject to that vulnerability, "Oh, if you click on a payload, you get access to X," great, now the adversary can escalate their power, and at some point, they have the power to do a negative thing to me.
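The adversary-powers exercise can be modeled as a tiny fixed-point computation: start from the baseline powers and keep applying exploit steps until nothing new is gained. The steps, power names, and loss below are invented for illustration, not a real attack graph:

```python
# Each step is (powers required, power granted); all names are hypothetical.
STEPS = [
    ({"email"}, "user_workstation"),              # phish -> code on a laptop
    ({"user_workstation"}, "admin_credential"),   # local credential theft
    ({"admin_credential", "internet"}, "dump_customer_db"),
]

UNACCEPTABLE = {"dump_customer_db"}               # the unacceptable loss

def reachable_powers(baseline: set[str]) -> set[str]:
    """Fixed-point walk: apply every applicable step until the power
    set stops growing, then return everything the adversary can do."""
    powers = set(baseline)
    changed = True
    while changed:
        changed = False
        for needs, grants in STEPS:
            if needs <= powers and grants not in powers:
                powers.add(grants)
                changed = True
    return powers

baseline = {"internet", "email", "metasploit"}
powers = reachable_powers(baseline)
print(sorted(UNACCEPTABLE & powers))  # non-empty -> a control gap exists
```

Removing a step (say, by vaulting administrator credentials) breaks the chain, which is exactly how controls "pop out" of the narrative.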
Amir Kazemi:
Yeah.
Andy Ellis:
But you're always aimed at those unacceptable losses, and the reason it's important to think about what the unacceptable losses are is that they don't map to your assets.
Amir Kazemi:
Gotcha.
Andy Ellis:
If you start from an asset base, you think about losing assets. Think about the airline industry, right?
Amir Kazemi:
Yeah.
Andy Ellis:
You can list all of the airline's assets. Passengers' lives are not an asset of the airline. You would never write them down as an asset. But when you think about unacceptable losses, right at the top is killing your passengers.
Amir Kazemi:
Yeah.
Andy Ellis:
Right?
Amir Kazemi:
Of course.
Andy Ellis:
So, now, as you think about how you would protect an airline, you want to think about, "Well, what are all of the ways that passengers could die in the care of an airline, and do you have controls that would minimize those hazards?"
Amir Kazemi:
Mm-hmm. Yep. I love that. I love that frame of thinking. Once you establish those controls, how do you get to a point of, "Okay. Now I have executive trust, and it's almost on autopilot," in a way? Right?
Andy Ellis:
So I think it's pretty rare for people to get to a trusted autopilot. It's often that people don't want to pay attention. And when you think about AppSec, because companies don't tend to think about the SDLC from end to end as that center of gravity, AppSec is all too often just thought of as a point problem: "Oh, we need to do better code reviews." "Oh, we need to do better secrets management." "Oh, we need to do each of these individual things." But we're not actually thinking holistically. The more things you have to deal with, the more likely it is that your executives are going to turn a blind eye to you. It's like, "Oh, we're doing 17 things around AppSec." "Well, how many things should you do?" is not actually their next question. They assume that if you've got 17 programs, this is well taken care of, because you would not be investing in 17 programs otherwise.
Amir Kazemi:
Yeah. So I think it's almost inevitable. You tell me. But as you build out more and more controls, you're essentially introducing tool sprawl, or too many tools across a program.
Andy Ellis:
Absolutely.
Amir Kazemi:
How are CISOs today, or even AppSec leaders, thinking about managing or balancing that in their programs?
Andy Ellis:
So I think as you're looking at that, "Oh, I've got so many things," the question always has to be, "What is buying you good defenses? What is effective, and what runs on its own?" You did talk about whether you have something that everybody feels comfortable with. Right? I'm a firm believer that the more you can get the developers to invest, the better off you are. So I prefer developer self-service on AppSec tools way more than having the security team run them.
Amir Kazemi:
Yeah.
Andy Ellis:
So it's like, "Oh, if you get the developers to buy in..." Some of the best... When I was at Akamai, we had a huge set of things around source code security that were actually all built by developers. I would often consult with the developers, but they were the ones who were like, "Oh, we need to have authenticated check-ins." This is back in 2000. Authenticated check-ins were barely a thing. The most common source code repository on the market did not support them. It literally was clear text. Anybody could check in and claim to be whoever they wanted, and our developers were the ones who said, "No, no, no. We're going to wrap this in an SSH tunnel so we can see who actually did the check-in," and they built this whole thing to do it. Now, I don't have to maintain it. It's their solution. So I get to think about it as part of my AppSec program, but it's not a tool I'm in charge of. I want to run platforms as a CISO. I don't want to run point solutions.
Amir Kazemi:
Yeah, yeah. Well, I mean, that's an interesting point around that collaboration between dev and security. Oftentimes, there's tension, right? So how do you not just minimize that tension, but build a better relationship between the two orgs, especially as a new CISO, a new leader coming into a new role? What are your thoughts on that?
Andy Ellis:
So I think there's a lot of different ways, but the first is to recognize that if you need somebody else to build and implement a solution, then they need to believe in the problem.
Amir Kazemi:
Yeah.
Andy Ellis:
You can't just lecture them your way into getting them to believe in it. I love doing the... If it takes me 10 arguments to convince you, well, can I do it in eight? Not because it's more efficient, but because I leave the last two for you. Your brain goes, "Oh, I've got this, and this, and this, and this, and oh, and this, and this," and you believe it way more than if I told you the whole thing. So the more I can get people to finish that argument themselves, the better that education and awareness sticks.
Then, on the flip side, the more it's clear that I understand what's going on on the developer side, the better. Often, you have a developer productivity or developer tools team that owns the SDLC infrastructure. They're going to be your biggest partner as a security professional if you can find things that would improve security and also make their life easier or match their goals. Here's a simple one: if you're a company that's been around for a while, you probably have a lot of legacy infrastructure in your SDLC. Go talk to your dev team and say, "Hey, what is the actual latency for rolling out software? If we need to make a major change, like OpenSSL just drops a new issue, what is our wall clock latency, and what is our cost to get there?" You'll often be shocked at the answer.
I actually looked at this when I was at Akamai. We actually couldn't roll out software without massive expense, because it had to go to our whole network. In fact, when it was at its worst, and this was fixed before I left, we would literally disrupt one product release. By disrupt one product release, I mean you would only get so many product releases a year, and one of them would get taken to deal with the vulnerability. At that point, the developer teams hated me. If I walked in and said, "Oh, we have to fix this," they're like, "No. You just destroyed an entire product release to do that. Everything has now slipped by however many weeks."
So we went and championed a set of programs that the developer productivity team had been trying to get prioritized. They were all about CI/CD efficiency. They wanted to go in and say, "Okay. We're going to bring down release time by this much." All of a sudden, the CISO is championing release productivity and release efficiency. When somebody said, "Why are you doing this?" I said, "Well, just do the math. Right now, when I say we need to fix something, I have to spend three weeks fighting the whole company to convince you. Not because you don't want to fix stuff, but because the cost is so high. If I can bring that cost down, you won't fight me. You believe it needs to be fixed, and now you'll just go do it."
Amir Kazemi:
Yeah. It's also that you're putting yourself in their shoes, right?
Andy Ellis:
Right.
Amir Kazemi:
Trying to help them improve the developer experience overall, so.
Andy Ellis:
Yeah, and one of my favorite things is: if you're in the same meetings with different people and you hear what their common critiques are, either of your requests or other people's, then the next time you're in a meeting, if you know what they're going to say, what they're going to object to, say it for them. Like, if you've got a... One of my best partners, she was responsible at the time for professional services. "Oh, we need to fix this thing in how our application works, but it's going to require our customers to all make a change." Right?
The first time we did it, she's like, "Well, here's what the cost will be: every professional services person interacting with every customer, boom." Then, going forward, I would just always say, "Oh, have we thought about the impact on professional services for this proposal?" I now have an ally who's like, "Oh, you see me. You know what my pain is. Even if you're not saying that's too much, you're at least asking the question for me." Then, I would know that if I wasn't in the room, she'd be like, "Well, is this secure? Does this meet our security requirements?" because I was speaking the language of someone else. So the more you can speak the language of the developer, "What is their actual pain point?", the more likely you can drive an AppSec program that they will appreciate.
Amir Kazemi:
I love that. Yeah. I love that. Coming back to this security controls topic, obviously, tool sprawl is a thing.
Andy Ellis:
Yep.
Amir Kazemi:
How do you think that that has affected visibility for security teams in general?
Andy Ellis:
So I think a big challenge is that if you're a security team, going and interacting with different tools is a pain. So in some teams, if they're really big, you have a team that manages each tool, and they're each prioritizing their own thing. But if you're not looking at that in an integrated way, what will happen is: you have your person who's looking at the pen testing results, and they come to a development team and say, "Here are all of my findings from this pen test. Go fix them." But they're walking in the day after the compliance team showed up and said, "Here are all of our findings about how you're not satisfying the processes we wrote down for SOC 2," right?
Amir Kazemi:
Yeah.
Andy Ellis:
Then, your SAST team comes in: "Here's the application security, all the defects we found looking at your code." So you start working at cross purposes when you have this sort of sprawl. Honestly, most tools suck at helping developers read them and decide what to do.
Amir Kazemi:
Yeah.
Andy Ellis:
We use this phrase, "alert fatigue," a lot in the industry. First, I want to say the fatigue is real. But I hate the fact that we call them alerts. I came out of an ops background. "Alert" means drop what you're doing and go solve this. An alert is: you have been breached, something is down, incident. You ran a source code analysis tool and found that I did a thing that maybe I shouldn't do? That's not an alert. Yet that's what most tools are doing: they're giving you hygiene findings, and they're important hygiene findings.
I don't want to minimize all of them, but I do recall I was told once... I had a system that did a port scan and vulnerability analysis, and it said we needed to turn off ICMP timestamp replies, which I'm actually in favor of doing: "I don't need to do this. Why am I doing the processing at all?" But the argument was that somebody would know what time it was on my servers. I'm like, "Have you not heard of NTP? Pretty sure the adversary knows what time is running on all of my servers, because I have an NTP constellation that works." In the early parts of my career, and I see people repeat this, you get this report from your tool, and you hand it to a developer and say, "Fix everything." They see a thousand things, and they're like, "No. Please prioritize..." They will argue with you for months over whose job it is to prioritize the list of things.
Amir Kazemi:
Between security and dev?
Andy Ellis:
Between security and dev.
Amir Kazemi:
Yeah.
Andy Ellis:
Now, nobody is actually fixing anything. Think about how much time we waste arguing about whether this should be done before that, across a thousand things. It's worth that argument if you're the developer, but in the meantime, nothing is actually getting fixed.
Amir Kazemi:
Yeah. That brings up the topic of prioritization. Who owns prioritization? How do you actually narrow down to the right critical alerts when you have all this tool sprawl, when you have this alert fatigue, right?
Andy Ellis:
Right.
Amir Kazemi:
There's this constant mess which we call "AppSec chaos." So how do you focus on the right things?
Andy Ellis:
Yes. So, ultimately, I actually think that the dev team owns prioritization. I know any developer who's listening is going to be like, "Well, you're just saying that because you're InfoSec." But I actually do mean it: I think they own it, but I think they should also own the tools. The problem is that we bring the tools, and since they own the prioritization, there's no incentive on us to fix the prioritization that the tools are providing.
So to solve this, the tools need to have the context. They need to help and say, "Look, here are the things you should go fix right now. You have five minutes. Go fix this thing." If you're building a tool, the number one thing in your dashboard should be: you know what level someone is at in the organization. Assume they only have five minutes; what should they do with it? If it's the CISO, it's, "Who do you call?" I have five minutes. My single greatest power is the ability to pick up the phone. So if you said, "You need to pick up the phone. Call this engineering manager and say, 'Your team hasn't fixed anything in six months,'" that's the most effective thing I can do with five minutes of my time.
Amir Kazemi:
Yeah.
Andy Ellis:
Right? Arm me for that phone call.
Amir Kazemi:
Yeah.
Andy Ellis:
But if I'm a developer who pops in, it's like, "You have five minutes. Hey, we have an input sanitization library that you are not using. Here's how to integrate it. Here's our recommendation and best practice," or, "We're doing tokenization. Here's our thing." But what is that five minutes? Then, the next thing is, "What does my path look like?" You have to sell people on a vision, not a treadmill, because if I say, "Look, what I need from you every day is to fix one bug," you're like, "I'm not really excited by that." But if I come to you and say, "Look, here's what we're going to do. We are going to have an application that's going to look like this from a security and safety perspective. People will trust it with their data," you can get on board with that when I say, "We're going to be implementing a vault for all PII. We're going to be removing all secrets from code. Here's the list of things we're doing."
You as the developer can have some say in how those get rolled out to you. When do you want each one? Do you want to roll them out to all of your apps at once, or do you want to pick an app? In fact, on the pick-an-app point, this is the thing I feel like I've been fighting with every security person my entire career: if you have more than one application in your environment, which I'm sure almost everybody does, and you're like, "Here's this critical problem that we want to go fix across all of our applications," you should never try to roll it out to your flagship applications first.
I know that's where all your data is. You want to go protect it, but everybody will fight you. Instead, go to marketing and be like, "Hey, you've got this web app. We're going to roll out this privacy vault code for you," and they're going to be like, "I don't have any privacy data." We're like, "Great. We're going to roll it out. It should have no impact on you, but we just want to make sure it works on our platform and that everybody is comfortable with it." Or let them pick. You're like, "I don't care. Roll it out to some set of applications. At some point, we'll go after the flagship, but the flagship should never be first."
Amir Kazemi:
In terms of remediation, what are the developers expecting? So prioritization is done, for example, right?
Andy Ellis:
Yep.
Amir Kazemi:
You're getting down to the right critical alerts, but what about remediation? How do you get… In a 10,000-person organization, how are you getting the right alerts to the right people in the most efficient manner? I think that's a hot topic right now.
Andy Ellis:
Oh, it absolutely is, and in so many organizations, it's just the institutional memory of some program managers inside the security team. They're like, "Oh, I know that this application is owned by Amir, so I'll call Amir and say, 'Hey, I got some stuff for you.'" We need to get to programmatic remediation, where we can identify a problem and know, "Here's what needs to be fixed." We know who needs to fix it. We can give them, "Here is your dashboard. So, Amir, here's all of the problems in your environment." You don't need a security person to collect that data for you. It's just automatically all there.
That's step one, and then we say, "Okay. Now, you need to start remediating, so how can we make it so that you're doing the same remediations at once?" So it's like, "Oh, we're going to do secrets removal," or give you the choice: "Here are 12 different problems across the 10 apps you're responsible for. You've got to fix 120 things. Do you want to fix one problem across all 10 apps, or do you want to fix all 12 problems in one app?" That should be your choice, but right now, the humans that interact with you decide for you, except what they really decide is the exact ordering of the 120. So it's almost the least efficient way to remediate.
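[Editor's illustration] The batching choice described above, fixing one problem across all apps versus all problems in one app, can be sketched in a few lines of Python. The app and problem names are purely illustrative, not from any real tool.

```python
# Sketch of the remediation-ordering choice: given findings as
# (app, problem) pairs, batch the work either by problem (one fix
# rolled across every app) or by app (clear one app completely),
# instead of handing a developer a flat list of 120 items.
from collections import defaultdict

def batch_findings(findings, by="problem"):
    """Group (app, problem) findings into work batches."""
    key = (lambda f: f[1]) if by == "problem" else (lambda f: f[0])
    batches = defaultdict(list)
    for finding in findings:
        batches[key(finding)].append(finding)
    return dict(batches)

apps = [f"app-{i}" for i in range(10)]
problems = [f"problem-{j}" for j in range(12)]
findings = [(a, p) for a in apps for p in problems]  # 120 items total

by_problem = batch_findings(findings, by="problem")  # 12 batches of 10
by_app = batch_findings(findings, by="app")          # 10 batches of 12
```

Either grouping covers the same 120 fixes; the point is that the developer, not the tool or a program manager, picks which batching to work through.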
Amir Kazemi:
Yeah. Got it. There's an emerging trend, obviously, around this acronym called ASPM.
Andy Ellis:
Yep.
Amir Kazemi:
Application Security Posture Management. I feel like this kind of category has evolved over time.
Andy Ellis:
Very much so.
Amir Kazemi:
Yeah. So I'm curious about your thoughts on it. The crux of it is that it's all about visibility, prioritization, and remediation. How are you seeing CISOs evolve their practices and their programs to implement and adopt platforms like ASPM tools?
Andy Ellis:
So I think we're really in the early stages of the ASPM migration, and it's probably worth looking at the CSPM world to say, "What might it look like?" Right? In CSPM, what we saw was a first iteration, which really matches the second iteration of ASPM: "Look, we'll just tell you what all your problems are. We have an easy way to collect them in one place." Cloud had that easy. It was like, "Oh, look, we just connect to AWS or GCP, we pull everything, we do some analysis, and we hand you everything wrong in your cloud. Have a nice day." Then they started adding that context to figure out what's happening, and all of a sudden, CSPM becomes this new space that's called CNAPP, Cloud-Native Application Protection Platform.
I think ASPM is at the beginning of making that change. The first iteration of ASPM, which people no longer really talk about, was an inventory of tools and the environments you apply them to. We're not even going to measure how effectively you're doing something. It's more like, "Have you rolled out the right tools to the right place?" So I think that's the old-school world. Now, the new world says, "Okay. Let's take that knowledge of here's all the tools you should have to protect your whole SDLC, start pulling that data all together in one place, and now start to figure out how it connects." It will be interesting to see, I think. Will ASPM continue taking the same leap as CSPM? Are they going to run into each other potentially someday? I'm always worried about which spaces will run into which spaces in the future because there is some overlap. If you do ASPM right, it will obviously include a whole bunch of elements of CSPM. I think that the converse isn't necessarily true because not every application will be in the cloud.
Amir Kazemi:
Yeah. Wouldn't you say that CSPM brought to light the visibility that AppSec eventually needed, right, that evolution to ASPM?
Andy Ellis:
Yes. Right, and I think you do see some ASPM capabilities in the CSPM world. I think that's one reason it has caused Gartner and others to start redefining ASPM to say, "You have to be able to deal with problems from the moment the developer thinks of them and starts typing until they're deployed. These are not separate problems. It is one problem space."
Amir Kazemi:
Yeah, yeah. Tying ASPM to security controls, I think there's an opportunity for CISOs in terms of consolidating some of their tools, and ASPM is one way to provide some of that consolidation, either through using native tools that a vendor provides or using your third-party tools. Do you have any thoughts on the balance between which ones I should rip and replace, or what I should do across my AppSec program to drive some of that efficiency?
Andy Ellis:
Yeah. So the way that I look at it is you should figure out how much value you're getting out of a tool, and that feeds into your rip-and-replace decision. If you have some legacy tool that you're not getting any value out of, you should be like, "Let's rip and replace this." But if you've got something that's providing value, maybe you wait, but you should always be strategic. I'm a fan of saying, "Look, if I've got 12 tools, I want to have a platform there. I do not want 12 different tools."
Now, it might be that I end up with six tools that feed into one platform, and I'm okay with that because that's now only one security system. It doesn't matter how many vendors I have; it's how many touch points I'm actually going to use to run my security program. But if I know where I'm going, then my goal is to say, "Okay. What's the easiest path to get there?" because if I'm doing a big change, I want to be able to show value as early as possible in the change. So it might be that I'm adding a new layer on top to collect a bunch of data points, and then I'm going to start ripping out old systems, but nobody will really notice because they're only interacting with my platform that shows all the data. It's like, "Oh, I need to replace my SAST vendor over here, and I'm going to do something else." But the developers don't see it because they don't care where a result came from as long as it's always coming through the same spot.
Amir Kazemi:
Yeah, yeah. You bring a good point on the developer side. Is there a developer productivity tax with too many of these AST tools?
Andy Ellis:
Oh, absolutely.
Amir Kazemi:
Okay.
Andy Ellis:
That's why I don't want the… Well, I want the development organization to own AppSec as much as possible. Individual developers should only have one dashboard. That should be our ultimate vision, which says, "Here's the software you own. Here's all the problems in it. Here's what you should fix next. Here's your rating." Imagine if I'm the CISO, right? I can go to the board and say, "I'm going to give our company a grade of 84%," whatever that means. I suspect with Tim Brown coming under SEC prosecution, we're going to have more CISOs who want to be very concrete about, "Oh, here's all the bad findings."
Well, I want you as a developer to be able to go into the same tool I use and get your personalized score that says, "Oh, the company is at 84%, and you are at 62%." Whatever those numbers mean, let's not debate them today, but you are dragging the company down. You see this, your boss sees it, and it gets aggregated by organization, so your boss can be like, "Well, Amir is at 62%. Karina next to him is at 75%. I know who's getting a better bonus this year. Amir, you'd better deal with this because clearly, you're not prioritizing security, and the board is all over us and the CEO is all over us for having bad security."
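[Editor's illustration] The score rollup described above can be sketched as a simple average over per-app scores. All owner names and numbers below are invented to mirror the 62%/75% example; real ASPM tools may weight scores very differently.

```python
# Hypothetical score rollup: each developer owns some apps, each app
# has a 0-100 security score, and scores aggregate upward by averaging,
# so a manager (or the board) sees the same numbers the developer sees.
def score(app_scores):
    """Average score across a collection of per-app scores."""
    return sum(app_scores) / len(app_scores)

owners = {
    "amir":   [55, 60, 71],   # averages to 62
    "karina": [70, 75, 80],   # averages to 75
}

per_dev = {dev: score(apps) for dev, apps in owners.items()}
company = score([s for apps in owners.values() for s in apps])
```

The design point is that `per_dev` and `company` come from the same underlying data, so the individual score and the board-level grade can never disagree.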
Amir Kazemi:
Yeah. No, that's great. I think what we'll do at this point, Andy, thank you so much, is go ahead and see if we can take a question or two.
Andy Ellis:
Okay.
Amir Kazemi:
Looks like we've got one question here from the audience: "From a CISO point of view, what are some of the most important metrics you need your AppSec program to report on? Do you need to take those to board meetings, or report them to the exec team, and how are you thinking about that?"
Andy Ellis:
So, eventually, it should be going to the board in some fashion, but not necessarily upfront. The first metric is coverage: "How many applications do you have, and what is your coverage of visibility and of controls?" Right? This is the hard part, as most people don't want to be like, "Oh, there's 75 applications out there that I have no visibility into." But you need to write that down. If you're unwilling to say what you know you don't know, then you're never going to find out what you don't know that you don't know.
If I write down, "I've got 75 apps that I don't know what's going on in, and I can list them," somebody in my organization is going to show up and say, "Uh, it's not 75, Andy. It's 130. Here's the 55 you didn't list." But if I don't list the 75, I'll never get to the 55, so just start with coverage of your controls. But I think at the high level, you're going to look at saying… There are some obviously known best practices around code hygiene where you're going to want to say, "Oh, yes, we do input sanitization everywhere. That's just a norm. What are all of our norms? Input sanitization, removing secrets from code, keeping PII in vaults."
List what those are and what percentage of your applications meet your standards, and you should be able to defend every one of those standards. If you say, "Oh, it's important that every vulnerability is fixed in 30 days," I'm going to challenge you on that because most vulnerabilities don't actually need to be fixed in 30 days. There are some that do, but there are a lot where it's like, "Oh, we're doing regular maintenance fixes. As long as we're within 90 days, great." Okay. What percentage of the time are you within 90 days? Every SLA should be met 85% of the time. The other 15%, you're going to miss your SLA. That's okay. Document it, understand why, clean up. But if you're aiming for a 100% SLA, your organization is going to hate you.
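[Editor's illustration] The SLA metric suggested above is just a fraction: what share of remediations landed within the SLA window, with roughly 85% as a healthy target rather than 100%. The remediation times below are invented for illustration.

```python
# Minimal sketch of SLA-compliance reporting: given days-to-fix for a
# set of remediations, report the fraction completed within the window.
def sla_compliance(days_to_fix, sla_days=90):
    """Fraction of remediations completed within the SLA window."""
    met = sum(1 for d in days_to_fix if d <= sla_days)
    return met / len(days_to_fix)

# Invented sample: 8 of these 10 fixes landed within 90 days.
fixes = [12, 45, 88, 91, 30, 60, 120, 75, 89, 10]
rate = sla_compliance(fixes)  # 0.8
```

Reporting the rate per standard (secrets removal, input sanitization, and so on) gives the defensible, percentage-based board metric described above, rather than a raw count of open findings.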
Amir Kazemi:
That's awesome. Last question is around the topic of ASPM.
Andy Ellis:
Yep.
Amir Kazemi:
"From a CISO's perspective, how do I know that I need it, and that I need it now?" What's your perspective on that?
Andy Ellis:
So I think the answer is: if your revenue stream relies on some application that other people are interacting with or using, you probably need this now, because you don't know what you don't know. If you have manually built an ASPM program, fantastic. If you could answer all the questions that came up here, and you've got a metric for coverage for each of these things, you functionally have a homegrown ASPM. You should consider how many people it takes to collect that data. Could you use a different ASPM program to let those people do something more valuable? But I think if you don't have applications, ASPM is probably not the right thing for you.
Amir Kazemi:
Yeah. No, I appreciate that. That's super relevant, and I think it's also… The other angle is size of organization, right? You might be too small for an ASPM. But if you're just the right size, or you don't have the visibility, like you mentioned, you're going to need something like that, right? You got-
Andy Ellis:
Yeah. If you're a small organization, and you can itemize and name every application off the top of your head, and you can hold your whole security program in your head, you are the ASPM. That's okay. I was that frog in the very early days. It was me and one other person. We were the ASPM for the company. At some point, that doesn't scale anymore.
Amir Kazemi:
Yep. Exactly. I think that's all we have time for on questions. How do people get in touch with you, Andy, if they want to follow you or speak with you?
Andy Ellis:
Yeah. So they can find me on Twitter or LinkedIn; those are my most common platforms. I'm CSOAndy on both. Go to csoandy.com to find me. I publish a newsletter there. It's supposed to be weekly, but with the chaos of the last month, it's been more like fortnightly. You can find the link there.
Amir Kazemi:
Amazing, and if anybody needs any resources or additional insight on ASPM, you can reach out to me at [email protected] or via LinkedIn. You can also visit our website at cycode.com; we have great resources at cycode.com/resources as well. Look out for the next episode that's going to be launching soon. Thank you, and have a great week. Thanks so much, Andy.
Andy Ellis:
Thanks for having me, Amir.
Amir Kazemi:
Thanks.