In the last article, we talked about the importance of pipeline security as part of a complete ASPM (Application Security Posture Management) program. Pipeline security broadly refers to the security of your repositories and pipelines themselves. This article discusses Application Security Testing (AST).
AST is a hotly debated group of capabilities in security: on the one hand, individual scanners are frequently said to have been commoditized by strong open source options. On the other hand, the market assumes that a standalone point solution is automatically “best in class.” The truth is more complicated – coding languages vary drastically from one another, and solutions often struggle with their complexities. Furthermore, security teams I talk to are often unaware of the benefits of running all of their security testing from a single place.
In this article, we’ll cover the four major types of application security testing and how, together, they cover every file in a repository from a single integration point.
Typical file types that make up a repository
Static Application Security Testing (SAST)
SAST has been around for a long time. It was one of the first application security testing methods, and originally took hours to days to run over large code changes. Nowadays, developers expect SAST to run quickly, in the pipeline, and to provide immediately applicable code fixes for any language.
SAST is meant to detect vulnerabilities in your first-party code – in other words, cases where your developers accidentally fail to account for an exploit. Due to its ease of deployment, SAST has become the go-to technology for getting started with application security. Of the files in your repository, SAST scans anything that directly houses application code, for example .py or .js files. A common misunderstanding concerns third-party vulnerabilities: these are detected by Software Composition Analysis (SCA, covered next), not SAST. However, SAST will detect potential issues that don’t have a CVE if you copy-pasted third-party code directly into your codebase – a practice that was more common before package managers like NPM were widely adopted.
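To make this concrete, here’s a minimal sketch of the kind of first-party flaw a SAST scanner flags: SQL injection via string concatenation, alongside the parameterized fix such tools typically suggest. The function and table names are hypothetical, chosen only for illustration.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # A SAST scanner would flag this line: user input concatenated
    # directly into a SQL statement (SQL injection, CWE-89)
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The typical suggested fix: a parameterized query, so the driver
    # treats the input strictly as data, never as SQL
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With an input like `' OR '1'='1`, the unsafe version returns every row in the table, while the safe version correctly returns nothing.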
A final concept to be aware of is Common Weakness Enumerations (CWEs). CWEs are categories of exploit types – such as SQL injection, out-of-bounds reads, or command injection. They’re the backbone of understanding SAST findings, since every finding is categorized by the weakness it represents.
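Because CWEs are how SAST findings get categorized, grouping results by CWE is a natural first triage step. Below is a small sketch of that idea; the finding tuples are invented sample data, not output from any real scanner.

```python
from collections import Counter

# Hypothetical SAST output: (file, line, CWE ID) tuples
findings = [
    ("app/views.py", 42, "CWE-89"),   # SQL injection
    ("app/views.py", 77, "CWE-79"),   # Cross-site scripting
    ("app/tasks.py", 15, "CWE-78"),   # OS command injection
    ("app/models.py", 9, "CWE-89"),   # SQL injection again
]

def findings_by_cwe(findings):
    """Count findings per CWE so triage can start with the most common weakness."""
    return Counter(cwe for _, _, cwe in findings)
```

Running this over the sample data shows CWE-89 appearing twice, pointing at SQL injection as the first category to address.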
Software Composition Analysis (SCA)
The rise of open source adoption led to the rise of SCA scanners. At their root, SCA scanners look at the open source packages that make up your application and detect vulnerable versions of those packages. Modern SCA tools continue to evolve, adding features like checking whether vulnerable functions from those dependencies are actually called, and checking for upstream malware alongside vulnerable versions.
SCA scanning is a newer discipline, which has led to a lot of complexity in determining its value to organizations. Ultimately, companies want protection both from supply chain exploits and from exploits that take advantage of misconfigurations in the third-party code they use. SCA is known for creating a lot of noise, as patching an open source library can take anywhere from a few minutes to kicking off a massive migration project.
At a basic level, SCA scanners work by analyzing your build files for library versions and comparing them against databases of known vulnerabilities. Maturity in this area comes down to enriching results from other data sources, or analyzing your code for exploitability.
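The basic mechanism can be sketched in a few lines: parse pinned versions out of a requirements-style build file, then match them against an advisory database. The advisories below are made-up placeholders; real tools query sources like OSV or the NVD, and do proper version-range comparison rather than the exact match shown here.

```python
# Toy advisory database: package -> list of (vulnerable_version, advisory_id).
# Entries are illustrative only, not real advisories.
ADVISORIES = {
    "examplelib": [("1.2.0", "DEMO-2024-0001")],
    "otherlib": [("0.9.1", "DEMO-2024-0002")],
}

def parse_requirements(text):
    """Parse 'name==version' lines, skipping comments and blanks."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        deps[name.strip().lower()] = version.strip()
    return deps

def scan(deps):
    """Return (package, version, advisory_id) for every exact-version match."""
    hits = []
    for name, version in deps.items():
        for vuln_version, advisory in ADVISORIES.get(name, []):
            if version == vuln_version:
                hits.append((name, version, advisory))
    return hits
```

Scanning a file pinning `examplelib==1.2.0` and `otherlib==1.0.0` reports only the examplelib advisory, since the pinned otherlib version doesn’t match the vulnerable one.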
Infrastructure as Code (IaC)
Historically, there’s been a divide between SAST + SCA on one side and IaC + containers on the other. As containerization adoption continues, this dividing line continues to shrink. It used to be the case that SAST and SCA were the realm of developers, while IaC and containers were the realm of infrastructure teams. Over time, applications and their underlying infrastructure have grown more closely united, with developers taking greater ownership of their entire product.
Instead of keeping these tools separate, I think it makes sense to combine them in a single place, as ultimately application knowledge is needed to fix issues on either the code or infrastructure side. Having separate tools for these capabilities has always caused issues, as security teams struggle to discover who’s responsible for a particular piece of infrastructure, or to understand where it was deployed from.
Infrastructure as code vulnerabilities are detections of misconfigurations in your infrastructure definitions. A simple example is forgetting to set encryption to true when setting up a new RDS database. A more complex example is detecting chained IAM permissions that allow a role takeover. IaC scanning has also grown in importance with the adoption of Helm and Kubernetes, as misconfigurations are frequently found in deployment definitions.
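The simple RDS example above can be expressed as a one-rule IaC check. This sketch runs over a simplified, already-parsed representation of Terraform resources (the dict shape here is invented for illustration; real scanners parse HCL or plan JSON directly), looking for `aws_db_instance` resources missing `storage_encrypted = true`.

```python
def check_rds_encryption(resources):
    """Flag aws_db_instance resources that don't set storage_encrypted to true.

    `resources` is a simplified list of dicts mimicking parsed Terraform,
    e.g. {"type": "aws_db_instance", "name": "main", "config": {...}}.
    """
    findings = []
    for res in resources:
        if res.get("type") != "aws_db_instance":
            continue  # this rule only applies to RDS instances
        if not res.get("config", {}).get("storage_encrypted", False):
            findings.append(
                f"{res['type']}.{res.get('name', '?')}: storage_encrypted is not true"
            )
    return findings
```

Real IaC scanners ship hundreds of rules like this one, covering each cloud provider’s resource types, and the harder cases (like chained IAM permissions) require graph analysis across resources rather than a single-resource check.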
Container Vulnerabilities
Container vulnerabilities can cause a lot of confusion because, as I’ve written about elsewhere, container scanning can detect the same findings as SCA – it just requires the container to be built before scanning. Having both capabilities has benefits: SCA scanning can run before the container is built, while container scanning is required for finding OS-level vulnerabilities in your applications.
Container scanning is especially important for compliance, as most frameworks still treat the operating system as the primary underlying infrastructure that needs vulnerability scanning. However, results in containerized contexts are typically filled with false positives, because containers don’t usually exercise the full capabilities of the underlying operating system.
Ultimately, a few features are important for maximizing the value of container scanning. First, tying findings back to the Dockerfile from which the image was built lets developers understand where a finding comes from; without this information, it’s extremely difficult to determine a finding’s source. Additionally, flexible scanning options matter, as you may want to scan locally, in a pipeline, or against the built image in a registry.
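The Dockerfile-attribution idea can be illustrated with a naive sketch: index a Dockerfile by line number, then point a package-level finding at the RUN instruction that installed the package, falling back to the FROM line for anything inherited from the base image. This is deliberately simplified (no handling of line continuations or multi-stage builds); real scanners correlate image layers with build instructions.

```python
def parse_dockerfile(text):
    """Map each Dockerfile instruction to its line number so scan findings
    can be traced back to the instruction that introduced them."""
    instructions = []
    for lineno, raw in enumerate(text.splitlines(), start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        keyword = line.split()[0].upper()
        instructions.append((lineno, keyword, line))
    return instructions

def attribute_finding(instructions, package):
    """Naive attribution: point a package finding at the RUN line that
    mentions the package, else at the FROM (base image) line."""
    for lineno, keyword, line in instructions:
        if keyword == "RUN" and package in line:
            return lineno
    for lineno, keyword, line in instructions:
        if keyword == "FROM":
            return lineno
    return None
```

For a Dockerfile that installs curl in a RUN step, a curl finding points at that RUN line, while a finding in a base-image package (say, openssl) falls back to the FROM line, telling the developer the fix is a base image update rather than a change to their own instructions.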
Overall, these scanning capabilities make up most of the benefits of a complete ASPM. What fundamentally unites them is covering all of the files in your repository with a single integration, reducing the usually extensive management overhead of coordinating several point solutions across these categories. Add the fact that most of these “best in class” point solutions are really using the same open source scanners under the hood, and there’s never been a better time to consolidate into a single scanner.