Over the last several weeks, Lapsus$ has breached a who’s who of software development organizations: NVIDIA, Samsung, Vodafone, Ubisoft, and Mercado Libre. And last night the stakes rose significantly when Lapsus$ confirmed the rumors of a Microsoft breach by releasing 37GB of data and posted screenshots suggesting they may have breached Okta as well.
This raises two questions: who is next, and what can you do now to reduce your risk?
Oh man, if this it what it looks (Okta got popped)… Blue Team everywhere is gonna be crazy busy.
— _MG_ (@_MG_) March 22, 2022
What We Do Know
While we don’t know exactly how Lapsus$ is getting access to this data, we do know that:
- Lapsus$ has publicly appealed to insiders on their Telegram account to provide them with VPN or Citrix access.
- Encouraging insiders to share access with them is brazen, but it’s not actually new. Attackers always look for low-hanging fruit (aka the weakest links), and there’s no weaker link than a disgruntled employee providing direct access.
- Lapsus$ seems to be targeting production data and source code in most breaches.
- Lapsus$ claimed they are predominantly interested in Okta’s customers, which we interpret as an attempt to tamper with Okta’s code in order to execute a software supply chain attack.
- The SDLC has many entry points, and lateral movement across the SDLC is easier than ever.
First, for Okta Customers:
Okta is a great software development organization. They take security extremely seriously, and we hope that this incident is minimal. However, until we learn more, Okta customers should immediately:
- Turn off all features that allow Okta support to access your instance.
- Look for increased privileges in Okta logs that might suggest an insider threat or a compromised account.
- Harden authentication by enforcing MFA (if you haven’t already) and IP range restrictions, then force all users to log back in and challenge them for second factors. Lastly, refresh persistent authentication tokens such as OAuth tokens.
- Okta has a strong history of transparency with their customers (see their response to Heartbleed) so be on the lookout for direct communication and any further guidance on mitigation steps from Okta’s customer success team.
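To make the session-refresh step concrete, here is a minimal sketch that clears every user’s active Okta sessions via the Okta Users API, forcing re-authentication. The domain and API token are placeholders, pagination is omitted for brevity, and you should verify the endpoints against Okta’s current API documentation before relying on this:

```python
import json
import urllib.request

def sessions_endpoint(domain, user_id):
    """URL whose DELETE clears every active session for one user."""
    return f"https://{domain}/api/v1/users/{user_id}/sessions"

def _okta_call(url, token, method="GET"):
    """Issue an authenticated Okta API request and parse the JSON response."""
    req = urllib.request.Request(
        url,
        method=method,
        headers={"Authorization": f"SSWS {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = resp.read()
    return json.loads(body) if body else None

def clear_all_sessions(domain, token):
    # NOTE: ignores pagination for brevity; Okta pages results via Link headers.
    for user in _okta_call(f"https://{domain}/api/v1/users", token):
        _okta_call(sessions_endpoint(domain, user["id"]), token, method="DELETE")
```

Clearing sessions does not revoke OAuth refresh tokens or API tokens; those need to be revoked separately.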
For Everyone Else:
- Stay humble. The Lapsus$ team is extremely skilled. Their victims are strong development teams. This can happen to any organization, so it’s important to prioritize action now to reduce your organization’s risk.
- Audit the basics. Even great teams can make basic mistakes. For example, Nissan failed to change default passwords on their Bitbucket server. An audit of the basics should include:
- Checking all systems for default passwords
- Confirming that all former employees have had their access revoked across all systems
- Creating an asset inventory of who has access to what across your SDLC
- Auditing Infrastructure as Code for private resources that are publicly accessible
- Auditing for production drift to confirm that deployed infrastructure still matches what your Infrastructure as Code defines
- Harden authentication. Start by logging all of your users out and revoking persistent tokens such as OAuth tokens. If you aren’t using MFA, turn it on; Google Authenticator is free and integrates easily with most DevOps tools and infrastructure. Also leverage IP range restrictions and IP allowlisting. For high-security environments, consider hardware tokens, shorter inactivity timeouts, and more frequent second-factor challenges. Since we know that Lapsus$ is recruiting insiders, it is especially important to consider steps that reduce insider risk. Thankfully, most of these steps also reduce overall security risk and enhance your security posture.
- Audit for excess privileges. Stopping insiders is very difficult, so it’s important to establish least-privilege policies that minimize their blast radius and ease of lateral movement. However, a least-privilege policy is only as good as its last audit for excess privileges, so now is the time to ask yourself: when was your last excess-privilege audit? For most organizations, the honest answer is that privileges were set when the account was provisioned and never reviewed again. Now is the time to catch up.
- Eliminate hardcoded secrets. Just as removing excess privileges is a key part of reducing insider risk, so is eliminating hardcoded secrets. Hardcoded secrets roll out the welcome mat for attackers to access other valuable resources. Not only should hardcoded secrets be removed from repositories, build logs, containers, Kubernetes configurations, members’ personal repos, and more, but once removed, authentication-related secrets should be revoked and rotated.
- Monitor for anomalous user behavior. One of the most important behaviors to track is repository cloning. While there are legitimate reasons for developers to clone repositories, excessive cloning, and especially downloading of repositories, should be investigated in combination with other risk factors. For example, heightened scrutiny should be placed on team members with the greatest risk, such as those leaving the company in the next 30 days or those with recent disciplinary action. And remember that members’ personal repositories are extensions of the corporate software supply chain: both Uber and AWS had incidents in which proprietary code ended up in members’ personal accounts and exposed admin credentials to production systems.
- Monitor Critical Code. Some types of code, such as Infrastructure as Code, branch protection rules, and security build rules, should be monitored for any and all changes. One of the easiest ways for an insider to exfiltrate data is to flip production resources from private to public, which can be done by altering Kubernetes or Terraform settings. Since Infrastructure as Code spans relatively few lines and changes infrequently, every change should trigger an alert so that all changes are deliberate and visible to security teams. The same is true for branch protection rules and security build rules, which may be configured via YAML files.
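To illustrate the Infrastructure as Code audit above, here is a minimal sketch that greps Terraform files for two common signs of publicly exposed resources: public S3 bucket ACLs and security groups open to 0.0.0.0/0. The patterns are illustrative assumptions, not an exhaustive policy:

```python
import re
from pathlib import Path

# Illustrative patterns for publicly exposed resources in Terraform (not exhaustive).
PUBLIC_PATTERNS = [
    re.compile(r'acl\s*=\s*"public-read(?:-write)?"'),        # public S3 bucket ACLs
    re.compile(r'cidr_blocks\s*=\s*\[[^\]]*"0\.0\.0\.0/0"'),  # security groups open to the world
]

def audit_terraform(root):
    """Return (file, line_number, line) for every suspicious line under `root`."""
    findings = []
    for tf in Path(root).rglob("*.tf"):
        for n, line in enumerate(tf.read_text().splitlines(), start=1):
            if any(p.search(line) for p in PUBLIC_PATTERNS):
                findings.append((str(tf), n, line.strip()))
    return findings
```

Purpose-built policy scanners go far beyond line-level regexes, but even this catches the most obvious misconfigurations.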
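On the MFA point: Google Authenticator implements standard TOTP (RFC 6238). A minimal sketch of code generation and drift-tolerant verification using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP code for a base32 secret (the scheme Google Authenticator uses)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, code, window=1, step=30):
    """Accept codes within +/- `window` time steps to tolerate clock drift."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, now + i * step, step), code)
               for i in range(-window, window + 1))
```

For the RFC 6238 test secret (`"12345678901234567890"` in base32) at time 59, this yields the documented value 287082.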
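The excess-privilege audit above can be sketched as a simple set difference between what each account is granted and what it has actually exercised; the data model here is a deliberately simplified assumption, with `used` typically derived from access logs over a recent window:

```python
def excess_privileges(granted, used):
    """Map each account to privileges it holds but never exercised.

    `granted` and `used` map user -> set of privilege strings.
    """
    return {
        user: perms - used.get(user, set())
        for user, perms in granted.items()
        if perms - used.get(user, set())
    }
```

For example, a user granted `{"repo:read", "repo:write", "admin"}` who only ever used read and write shows up with `{"admin"}` flagged for review.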
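As a starting point for the hardcoded-secrets step, a minimal regex-based scanner; the patterns are illustrative assumptions, and dedicated secret-scanning tools cover far more cases:

```python
import re

# Illustrative patterns only; purpose-built scanners detect many more secret formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_assignment": re.compile(
        r"(?i)(password|secret|api_key|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, line_number) for every potential hardcoded secret."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, n))
    return hits
```

Remember the point from the step above: finding a secret is only half the job; the credential must also be revoked and rotated.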
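One simple way to operationalize the cloning guidance is a z-score heuristic over per-user clone counts; the threshold is an assumption you would tune against your own baseline, and flags should be combined with the other risk factors mentioned above:

```python
import statistics

def flag_excessive_cloners(clone_counts, z_threshold=3.0):
    """Flag users whose clone count sits more than `z_threshold` population
    standard deviations above the team mean."""
    counts = list(clone_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:  # everyone identical: nothing anomalous
        return []
    return [user for user, count in clone_counts.items()
            if (count - mean) / stdev > z_threshold]
```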
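The critical-code monitoring above can be approximated in CI with a check that flags any diff touching protected paths; the glob list is an illustrative assumption you would adapt to where your IaC and build rules actually live:

```python
import fnmatch
import subprocess

# Illustrative protected paths; adjust to your own IaC and build-rule locations.
PROTECTED_GLOBS = ["*.tf", "k8s/*.yaml", ".github/workflows/*", "CODEOWNERS"]

def changed_files(base="origin/main"):
    """Files changed relative to `base`, per `git diff --name-only`."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def protected_changes(files):
    """Subset of `files` matching a protected glob; a nonempty result means alert."""
    return [f for f in files
            if any(fnmatch.fnmatch(f, g) for g in PROTECTED_GLOBS)]
```

Wired into a pipeline, `protected_changes(changed_files())` returning anything at all would page the security team.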
For Organizations with Access to Their Customers’ Environments
When Lapsus$ said their focus was primarily on Okta’s customers, we suspect the group was hoping to turn Okta into a vector for attacking Okta’s own customers, much like SolarWinds. Organizations that deploy agents, place hardware or software on customers’ networks, or have access to their customers’ resources should take the extra step of validating integrity across each phase of their software development life cycle. This reduces the risk of code tampering. Two key handoffs to validate include:
- Code to Build. Developers should sign commits to provide a validation mechanism for the build.
- Build to Cloud. The provenance of artifacts should be monitored to ensure that, for example, all production containers originated from an approved registry.
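A sketch of the Build-to-Cloud check: validating that every container image reference comes from an approved registry. The registry names are placeholders, and real provenance validation (signatures, attestations) goes well beyond prefix matching:

```python
# Placeholder registry prefixes; replace with your organization's own.
APPROVED_REGISTRIES = {"registry.internal.example.com", "ghcr.io/my-org"}

def image_approved(image_ref):
    """True when the image reference starts with an approved registry prefix.

    Bare Docker Hub names such as 'nginx:latest' carry no registry prefix
    and are rejected outright.
    """
    return any(image_ref == reg or image_ref.startswith(reg + "/")
               for reg in APPROVED_REGISTRIES)
```

Running this against the image references in your deployment manifests gives a quick first pass before layering on cryptographic signing.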
Carpe Diem!
While we cannot know who Lapsus$ is targeting next, we can all take this as an opportunity to drive adoption of security tools and processes. That’s the silver lining. The steps above represent good governance and basic application security principles. Yet organizations deprioritize these simple steps because of the misperception that they reduce feature release velocity. It doesn’t have to be that way. Savvy security professionals will apply the same principles that have made DevOps so successful to their security initiatives to reduce risk without harming developers’ productivity.
Originally published: March 22, 2022