Hardcoded secrets have been the gateway into – and the target of – several high-profile security breaches in recent years. According to IBM’s Cost of a Data Breach Report 2023, stolen or compromised credentials were the most common cause of a data breach in 2022 and the second most common cause in 2023. Breaches involving secrets take longer to detect, allowing attackers more time to move laterally between systems and cause more damage to your organization.
Organizations are increasingly aware that hardcoding secrets must be stopped as part of their risk management efforts, and several technologies, such as secrets scanners, specifically target this problem. With the advent of Generative AI, however, the problem of hardcoded secrets has scaled to a whole new level.
Generative AI and Engineering
Generative AI and Large Language Models (LLMs) are the next big thing in tech development and productivity, and they have already profoundly transformed the way we do business. Engineering teams are using these tools to code, applying them to many daily tasks:
- Optimizing code
- Documenting and commenting
- Writing readme files
- Creating tests for code
- Generating efficient algorithms
- Fixing bugs
- Explaining existing code
Using Generative AI to code creates new issues for security professionals to resolve. The truth is that humans are still better at writing code than Gen-AI currently is; in fact, some studies suggest that Gen-AI models have become less accurate over time. There is a risk inherent in using Generative AI to code, and it becomes even more obvious when considering hardcoded secrets.
Hardcoded Secrets in AI-Generated Code
Generative AI models are designed to understand context and generate code that aligns with best practices, including secure coding techniques. However, they may occasionally produce code with hardcoded secrets because of the data used in training the model.
If the training data includes code snippets with hardcoded secrets, there is a greater likelihood that the model will replicate that pattern in the code it generates. The AI model doesn’t inherently understand the concept of “secrets” or their security implications; it simply mimics the patterns it has observed during training. This means that engineering and security teams must be vigilant when reviewing AI-generated code.
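For illustration, here is a minimal, hypothetical example of the kind of pattern an AI assistant can replicate from its training data, alongside the safer alternative of resolving the credential at runtime. The password value and the DB_PASSWORD variable name below are made up for the example.

```python
import os

# Pattern an AI assistant may replicate from training data:
# a credential embedded directly in source code (placeholder value).
DB_PASSWORD = "p@ssw0rd-1234"  # hardcoded secret -- ends up in version control

# Safer pattern to request (or rewrite to): resolve the credential at
# runtime from the environment or a secrets manager, so it never
# appears in the repository.
db_password = os.environ.get("DB_PASSWORD")
if db_password is None:
    raise RuntimeError("DB_PASSWORD environment variable is not set")
```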
Mitigating the risk posed by secrets and handling secrets correctly is paramount. Treating AI-generated code just like human-written code, subjecting it to rigorous security scanning and testing, is vital to the security of your application.
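As a rough sketch of what scanning AI-generated code before it is merged can look like, the snippet below runs a few secret-shaped regular expressions over the files passed to it. It is a simplified illustration, not a substitute for a dedicated secrets scanner, and the patterns and script behavior shown here are assumptions chosen for readability.

```python
import re
import sys

# A few illustrative patterns for secret-shaped strings. Real scanners use
# far larger rule sets plus entropy checks; these are simplified examples.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_file(path: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for number, line in enumerate(handle, start=1):
            if any(pattern.search(line) for pattern in SECRET_PATTERNS):
                findings.append((number, line.strip()))
    return findings

if __name__ == "__main__":
    hits = [(path, n, line) for path in sys.argv[1:] for n, line in scan_file(path)]
    for path, number, line in hits:
        print(f"{path}:{number}: possible hardcoded secret: {line}")
    sys.exit(1 if hits else 0)  # non-zero exit fails a CI step or pre-commit hook
```

Wired into a pre-commit hook or CI step, any hit blocks the change until the credential is moved out of the code, regardless of whether a human or an AI assistant wrote it.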
Your engineers are already using these productivity tools. To keep your application secure, you need to address the risk of hardcoded secrets now, before they are exposed and your organization suffers a breach.
Visit Cycode at Fal.con 2023 to Learn More
Want to learn five strategies for eliminating the risk of hardcoded secrets in AI-Generated code?
Cycode will be presenting The Risk of Hardcoded Secrets in AI-Generated Code at Fal.con 2023 on Tuesday, September 19, 2023 at 4pm at the Partner Theater – The Hub (Promenade South Level).
The presentation covers:
- How artificial intelligence and Large Language Models (LLMs) have paved the way for groundbreaking advancements in code generation
- How this innovation comes with the risk of models generating code with hardcoded secrets, such as API keys or database credentials
- How you can mitigate these risks using five strategies for securing your code
Come see us speak or stop by booth #1006 to learn more about how we can help you stop hardcoded secrets so you can gain peace of mind from code to cloud.
Originally published: September 18, 2023