I've compiled a package of security practices that can be applied iteratively in your DevOps team, and I promised to share more material on the topic. But before we dive into the entire 'DevOps eight', I'd like to share four basic steps towards better security.
The following four items are small but important steps towards developer-first security, and all of them are easy to get started with at a basic level.
Developers know the code better than security professionals, and visualizing the data flow augments the collaboration between developers and security. I think this is something we should utilize more to achieve better security long-term. If you represent security and want to try contextual, hands-on security training, I recommend drawing data flow diagrams from a use case perspective. Draw.io is a great tool to get started with. Start with the use cases the team is currently working on. Describing the system together like this is a great way to talk about security risks and discuss potential countermeasures to the threats you identify. Don't forget to add the countermeasures you identify together as tickets in the team backlog. :)
A CI/CD pipeline is the heart of a successful DevOps strategy, but it's also a vulnerable spot in terms of cybersecurity. Whatever output the pipeline produces will be the code released to customers, so an attacker who gains control over parts of the CI/CD pipeline controls the output.
There are at least three potential objectives for an attacker in terms of CI/CD:
All of the above can potentially lead to severe business impact, which is why it's so important to secure your CI/CD pipeline. Think about what code or scripts execute in your pipeline. Who's authorized to modify the configuration or the code being executed? Should the pipeline logic live in a separate repository or in the same one as the application code? How do you handle credentials and other secrets (e.g. code signing keys) that are required for the CI/CD pipeline to work? Remember to never store credentials or keys in your source code repo! If you have done that, rotate the secrets/keys as soon as possible. And don't forget to protect the administration of the CI/CD pipeline itself. Weak passwords or administration over plain-text HTTP are typical examples of weaknesses an attacker could exploit to gain full control over the pipeline.
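To make the "no secrets in the repo" advice concrete, here is a minimal sketch of how a secret scan could work. The patterns and function names are my own illustration, not any particular tool's API; dedicated scanners like gitleaks or truffleHog ship far more comprehensive rule sets and also scan git history, which a naive file scan misses.

```python
import re

# Illustrative patterns only. Real scanners cover hundreds of credential
# formats; these three are just common, easy-to-recognize examples.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_password": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{4,}['\"]"),
}

def scan_text(text):
    """Return a list of (pattern_name, line_number) hits found in text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

A check like this is cheap enough to run on every commit in the pipeline, but remember: a hit means the secret is already compromised and must be rotated, not just deleted from the file.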
Assume breach is a good mindset when building sustainable security. Even if you know a lot about security and believe you've made massive proactive efforts to secure your application, assume that you missed some parts of the puzzle. Nothing is perfect, and we have a lot to learn from common security mistakes made by developers.
Working from an 'Assume breach' perspective will make you and your team think about abuse cases during design and build. Actively collecting logs and analyzing the results periodically is a great way to iteratively form your monitoring requirements. By analyzing the application behaviour in production, you'll learn which categories of events you want immediate alarms on. Start with silently gathering log data, analyze the results periodically, and build your alarms based on that analysis.
In cloud platforms like AWS, it's relatively easy to set up monitoring and send alarms when anomalies are detected. As an example, let's say that you are working with infrastructure as code and your infrastructure is immutable. You and the other team members decided that there's no admin use case where you actually need to log in to the production environment as an administrator, since all your deployments are done automatically through the CI/CD pipeline. Since an admin login is unexpected, it should trigger an immediate alarm; detecting it quickly minimizes the impact of e.g. leaked credentials.
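As a sketch of the detection logic, the function below filters CloudTrail-style event records for successful console logins, which in this scenario should never happen. The field names (`eventName`, `responseElements`) follow CloudTrail's JSON format; the alerting side (e.g. a CloudWatch metric filter plus SNS) is deliberately left out, and the function name is my own.

```python
# Event names that should never occur in an environment where humans
# are not supposed to log in. "ConsoleLogin" is the CloudTrail event
# recorded for AWS Management Console sign-ins.
UNEXPECTED_EVENTS = {"ConsoleLogin"}

def find_unexpected_logins(events):
    """Return the CloudTrail-style events that should trigger an alarm."""
    return [
        e for e in events
        if e.get("eventName") in UNEXPECTED_EVENTS
        and e.get("responseElements", {}).get("ConsoleLogin") == "Success"
    ]
```

In practice you wouldn't write this yourself: a CloudWatch Logs metric filter over the CloudTrail log group can match the same pattern and raise the alarm for you. The point is the policy, not the code: anything your team agreed should never happen in production is a cheap, high-signal alarm.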
You are probably familiar with the OWASP Top 10, where one of the categories of common security problems is A9: Using Components with Known Vulnerabilities. Your application is most likely vulnerable if you don't monitor and continuously update the components you use. This goes for both direct and transitive dependencies.
The monitoring can be done manually (e.g. by subscribing to mailing lists), but I recommend that you try to automate the process. This is known as Software Composition Analysis (SCA). The solution can look a bit different depending on what technology stack you use, but here are a few examples of tools to use:
The examples above all have in common that they look for known vulnerabilities. It's also important to understand the maturity of your open source components in terms of maintenance and security awareness. How will you handle a vulnerability discovered in a dependency that is no longer maintained?
Many of the free tools rely on the CVEs registered in the National Vulnerability Database (NVD). Unfortunately, this is not a high-quality data source, and many vulnerabilities in open source components never get a CVE registered at all. Depending on your budget and business goals, consider one of the commercial alternatives, like VulnDB or BlackDuck, which aggregate both CVE data and vulnerability information from other sources.
And as with any other automated security control, begin by gathering data about typical findings rather than failing the build, while you learn how the tool behaves on your code base. Think about what your policy should be. Do you want to fail on all high and critical findings? Even if there's no patch available that fixes the issue?
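Once you've settled on a policy, it helps to express it explicitly rather than bury it in pipeline configuration. The sketch below shows one way to encode the questions above; the finding format (`severity`, `fix_available`) is hypothetical and would need to be adapted to whatever your SCA tool actually emits.

```python
def should_fail_build(findings, fail_severities=("critical", "high"),
                      require_fix_available=True):
    """Return True if any finding matches the team's break-the-build policy.

    findings: list of dicts with a 'severity' string and an optional
    'fix_available' boolean (a hypothetical format for illustration).
    """
    for finding in findings:
        if finding["severity"].lower() not in fail_severities:
            continue
        if require_fix_available and not finding.get("fix_available", False):
            # No patch exists yet: report it, but don't block the team
            # on something they can't act on today.
            continue
        return True
    return False
```

Whether unpatchable findings should block the build (`require_fix_available=False`) is exactly the kind of policy decision the team should make together, not something the tool's defaults should decide for you.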
If you want to learn more or share your experience in security practices as part of development, contact me at email@example.com. Looking for more stuff to read? Check out the links below!